Feature Description
Problem Statement:
kro creates "child" resources, and in principle users can change (or delete) those resources. What should kro do in that case?
If I understand correctly, there is currently a configurable re-reconciliation interval (10 hours?), so kro will only fix or recreate any drift/deletion every 10 hours.
Proposed Solution:
We can watch the child objects and respond to any drift much more rapidly. (Note: we might want to allow users to opt out of drift correction, though I'm not sure whether people want this.)
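To make the opt-out idea concrete, drift handling could be modelled as a per-resource policy. The following is only a sketch; `DriftPolicy` and its values are hypothetical names for illustration, not an existing kro API:

```go
package main

import "fmt"

// DriftPolicy is a hypothetical per-resource knob (the type name and
// values are illustrative, not an existing kro API) describing what to
// do when a child resource diverges from its desired state.
type DriftPolicy string

const (
	PolicyCorrect DriftPolicy = "Correct" // re-apply the desired state
	PolicyIgnore  DriftPolicy = "Ignore"  // observe the drift but leave it
)

// actionFor decides what the controller should do for a child resource,
// given its policy and whether drift was detected.
func actionFor(p DriftPolicy, drifted bool) string {
	if !drifted || p == PolicyIgnore {
		return "none"
	}
	return "re-apply"
}

func main() {
	fmt.Println(actionFor(PolicyCorrect, true))  // re-apply
	fmt.Println(actionFor(PolicyIgnore, true))   // none
	fmt.Println(actionFor(PolicyCorrect, false)) // none
}
```

A policy like this would let users who deliberately hand-edit child resources keep their changes, while the default stays "correct drift".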
The solution would have to be careful both not to create "too many" watches and not to get confused by other controllers acting on the same objects. For example, if only the status of an object changes, we should not re-reconcile. We also need to define whether we want to manage all spec fields or just the ones the user specifies (and be mindful of fields like Deployment spec.replicas, which another controller such as an HPA may own).
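One cheap way to skip status-only updates is to key off metadata.generation, which the API server bumps only when .spec changes; this is the same idea behind controller-runtime's GenerationChangedPredicate. A minimal stdlib-only sketch:

```go
package main

import "fmt"

// objectMeta holds the one metadata field we need: generation is
// incremented by the API server only on .spec changes, never on
// status-only updates.
type objectMeta struct {
	Generation int64
}

// specChanged reports whether an update event reflects a spec-level
// change worth re-reconciling, mirroring the filtering done by
// controller-runtime's predicate.GenerationChangedPredicate.
func specChanged(oldMeta, newMeta objectMeta) bool {
	return oldMeta.Generation != newMeta.Generation
}

func main() {
	fmt.Println(specChanged(objectMeta{Generation: 3}, objectMeta{Generation: 3})) // false: status-only update
	fmt.Println(specChanged(objectMeta{Generation: 3}, objectMeta{Generation: 4})) // true: spec was edited
}
```

Note the caveat: label and annotation edits do not bump generation either, so a generation-based filter alone would also miss metadata drift.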
If we are using watch, we should also use it to update the status of our "child" resources more rapidly. This should improve performance versus polling (are we polling today?).
Alternatives Considered:
We could continue to use polling, or fall back to polling; watch is an optimization over polling. I think we should focus this issue on the desired behaviour when "child" resources change or are deleted, rather than on how we detect that. (Or at least, I think they are separable decisions.)
Additional Context:
I'm working on applyset in kubectl, which may have some overlap.
Please vote on this issue by adding a 👍 reaction to the original issue
If you are interested in working on this feature, please leave a comment
From my own testing, it seems that drift detection is already in place. I tested it by updating a ResourceGroup and then watching kro update the desired state. Sometimes, to "accelerate" things, I needed to delete the kro pods so that it immediately tries to reconcile; I wonder how long it would have waited without the restart.
It already seems to try to detect some drift (deltas):
kro-7556b99cd8-mh5xg kro 2025-02-21T13:07:29.884Z DEBUG controller.eksclusters No deltas found for resource {"namespace": "cluster1", "name": "cluster1", "resourceID": "argocdSecret"}
I'm not sure which attribute changes it considers to be drift (labels, annotations, spec?).
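The "No deltas found" log suggests kro compares desired against observed state per resource. A hedged, stdlib-only sketch of what such a delta check might look like if it compares only the fields the user actually specified (a hypothetical helper for illustration, not kro's actual implementation):

```go
package main

import (
	"fmt"
	"reflect"
)

// driftedFields compares desired vs. observed values for only the
// fields the user declared (e.g. "spec.image"), ignoring everything
// else, such as spec.replicas when another controller owns it.
// Hypothetical helper, not kro's actual code.
func driftedFields(desired, observed map[string]any, managed []string) []string {
	var drifted []string
	for _, f := range managed {
		if !reflect.DeepEqual(desired[f], observed[f]) {
			drifted = append(drifted, f)
		}
	}
	return drifted
}

func main() {
	desired := map[string]any{"spec.image": "nginx:1.27", "spec.replicas": 3}
	observed := map[string]any{"spec.image": "nginx:1.25", "spec.replicas": 5}

	// Only spec.image is managed, so the replica change is not drift.
	fmt.Println(driftedFields(desired, observed, []string{"spec.image"})) // [spec.image]
}
```

Whether labels and annotations count as drift would then simply be a question of whether they appear in the managed field set.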