bug: The namespace (downstream object) in the pcluster is getting terminated after creating the APIBinding #24
Comments
Logs on the syncer side when the APIBinding is created:
E0607 20:21:50.997797 1 spec_controller.go:348] kcp-workload-syncer-spec failed to sync {{"" "v1" "secrets"} "1x5b9q93chbfwcsd|edge-2/default-token-8j4sw"}, err: secrets "default-token-8j4sw" not found
The shadow namespace in the cluster reappears after deleting the created APIBinding, with the following logs:
I0608 15:35:27.414759 1 spec_controller.go:299] "queueing GVR" syncTarget.workspace="2rdefjxpbx1yb9rx" syncTarget.name="edge-1" syncTarget.key="6YKBsgxBSSW1XYFm3wkHf97QZEkLyTHV4Pqguq" reconciler="kcp-workload-syncer-spec" key="2rdefjxpbx1yb9rx|edge-1/default-token-7mcwl" gvr="/v1, Resource=secrets"
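For anyone triaging: a quick way to check whether the token secret the syncer complains about still exists on both sides (a sketch, reusing the kubeconfig paths and kind context names from the scripts below):

export KUBECONFIG=.kcp/admin.kubeconfig
# Upstream: does the service-account token secret still exist in the kcp namespace?
kubectl get secrets -n edge-2 | grep default-token || echo "token secret gone upstream"
# Downstream: what does the pcluster still hold for that namespace?
KUBECONFIG=~/.kube/config kubectl --context kind-edge-2 get secrets -A | grep default-token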
@pankajskku as mentioned in the Slack chat, any chance to get a smaller reproducer for this? I'm just being realistic: this would require somebody to recreate your setup on two different architectures with code that has already been removed from the main branch. Alternatively, is there any chance you could look at the code yourself and find which part is responsible for deleting the resources, and why?
/transfer-issue contrib-tmc |
Describe the bug
To synchronize Kubernetes objects from each KCP namespace to its corresponding SyncTarget (and, in turn, to a namespace in the pcluster), I have created placement policies mapping each SyncTarget/Location to a KCP namespace.
However, I have encountered an inconsistency in KCP's behavior when creating the APIBinding on a MacBook (Darwin ARM64, M1 Pro) compared to a Linux VM (AMD64).
On the Linux VM, creating an APIBinding for the Kubernetes APIExport terminates the namespace in the provisioned cluster that corresponds to the KCP namespace. This does not happen on the MacBook (Darwin ARM64): there, creating the APIBinding leaves the namespace in the provisioned cluster untouched.
Here is the sequence of steps I follow to establish KCP wiring:
env_variables.sh: holds the environment variables for the cluster names, workspace, etc.
The inconsistency appears in KCP's behavior during step 4 on the Linux VM (AMD64).
If anyone has encountered a similar issue or has insights to share, your input would be greatly appreciated.
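For context, after kcp and the kind clusters are up, the numbered scripts below are run in order (a sketch of the sequence, limited to the scripts shown in this issue):

source env_variables.sh
./4-ws-sync.sh       # create the workspace and deploy the syncers
./5-labelsyncer.sh   # label each SyncTarget
./6-ns-loc-pp.sh     # create namespaces, Locations, and Placements
./7a-APIBINDING.sh   # create the APIBinding (the step where behavior diverges)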
Steps To Reproduce
Expected Behaviour
With KCP v0.11.0 for the AMD64 arch.: the namespace (downstream object) in the pcluster corresponding to the KCP namespace (upstream object) is terminated.
With KCP v0.11.0 for the Darwin ARM64 arch.: the namespace (downstream object) in the pcluster is not terminated.
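One way to observe the divergence directly is to watch the downstream namespace while creating the APIBinding (a sketch; the kind context name is taken from env_variables.sh):

# On the AMD64 VM the namespace transitions to Terminating;
# on the Darwin ARM64 MacBook it stays Active.
KUBECONFIG=~/.kube/config kubectl --context kind-edge-1 get ns edge-1 -w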
Additional Context
The namespace state after creating the APIBinding:
irl@hub:~/pankaj/octopus$ k get ns edge-1 -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    kcp.io/cluster: 2rdefjxpbx1yb9rx
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{},"labels":{"name":"edge-1"},"name":"edge-1"}}
    scheduling.kcp.io/placement: ""
  creationTimestamp: "2023-06-08T15:10:58Z"
  labels:
    kubernetes.io/metadata.name: edge-1
    name: edge-1
  name: edge-1
  resourceVersion: "1972"
  uid: 92f0dd0b-b1fa-45ef-8d20-5bafd1d11e28
spec:
  finalizers:
status:
  conditions:
  - message: No available sync targets
    reason: Unschedulable
    status: "False"
    type: NamespaceScheduled
  phase: Active
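Since the condition reports Unschedulable with "No available sync targets", it may be worth checking the scheduling objects in the workspace at that moment (a sketch, using the same admin kubeconfig as the scripts):

export KUBECONFIG=.kcp/admin.kubeconfig
kubectl ws root:octopus           # enter the workspace
kubectl get synctargets -o wide   # are the sync targets still Ready?
kubectl get locations,placements  # do the Locations/Placements still match them?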
The namespace state before creating the APIBinding:
irl@hub:~/pankaj/octopus$ k get ns edge-1 -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    kcp.io/cluster: 2rdefjxpbx1yb9rx
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{},"labels":{"name":"edge-1"},"name":"edge-1"}}
    scheduling.kcp.io/placement: ""
  creationTimestamp: "2023-06-08T15:10:58Z"
  labels:
    kubernetes.io/metadata.name: edge-1
    name: edge-1
    state.workload.kcp.io/6YKBsgxBSSW1XYFm3wkHf97QZEkLyTHV4Pqguq: Sync
  name: edge-1
  resourceVersion: "2033"
  uid: 92f0dd0b-b1fa-45ef-8d20-5bafd1d11e28
spec:
  finalizers:
status:
  conditions:
  - status: "True"
    type: NamespaceScheduled
  phase: Active
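The key difference between the two states is the state.workload.kcp.io/<sync-target-key> label, which is what makes the syncer pick the namespace up. A quick way to compare just those labels (a sketch; assumes jq is installed):

export KUBECONFIG=.kcp/admin.kubeconfig
# Print only the workload-state labels; an empty result means the namespace
# is no longer scheduled to any SyncTarget.
kubectl get ns edge-1 -o json \
  | jq '.metadata.labels | with_entries(select(.key | startswith("state.workload.kcp.io/")))'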
pankajthorat@Pankajs-MacBook-Pro octopus % cat env_variables.sh
#!/bin/bash
export HUB_CLUSTER_NAME="hub-operator-system"
export CORE_CLUSTER_NAME="core-1"
export EDGE1_CLUSTER_NAME="edge-1"
export EDGE2_CLUSTER_NAME="edge-2"
#export CLUSTER_NAMES=("$HUB_CLUSTER_NAME" "$CORE_CLUSTER_NAME")
export CLUSTER_NAMES=("$HUB_CLUSTER_NAME" "$CORE_CLUSTER_NAME" "$EDGE1_CLUSTER_NAME" "$EDGE2_CLUSTER_NAME")
export WORKSPACE_NAME="octopus"
pankajthorat@Pankajs-MacBook-Pro octopus % cat 4-ws-sync.sh
#!/bin/bash
source env_variables.sh
export KUBECONFIG=.kcp/admin.kubeconfig
kubectl workspace create $WORKSPACE_NAME --enter
#kubectl workspace create-context
for cluster_name in "${CLUSTER_NAMES[@]}"; do
  : # loop body lost in the original paste (see the syncer-generation sketch after this script)
done
for cluster_name in "${CLUSTER_NAMES[@]}"; do
  KUBECONFIG=~/.kube/config kubectl config use-context kind-"$cluster_name"
  KUBECONFIG=~/.kube/config kubectl apply -f "$cluster_name".yaml
done
echo "Sleeping for 30 seconds..."
sleep 30
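The body of the first loop in 4-ws-sync.sh did not survive the paste; it is presumably the step that generates the per-cluster syncer manifest "$cluster_name".yaml that the second loop applies. With kcp v0.11 that is typically done roughly like this (a sketch; the image tag is an assumption):

for cluster_name in "${CLUSTER_NAMES[@]}"; do
  # Generate the syncer manifest for each kind cluster (image tag assumed).
  kubectl kcp workload sync "$cluster_name" \
    --syncer-image ghcr.io/kcp-dev/kcp/syncer:v0.11.0 \
    --output-file "$cluster_name".yaml
done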
pankajthorat@Pankajs-MacBook-Pro octopus % cat 5-labelsyncer.sh
#!/bin/bash
source env_variables.sh
export KUBECONFIG=.kcp/admin.kubeconfig
for cluster_name in "${CLUSTER_NAMES[@]}"; do
  kubectl label synctarget/"$cluster_name" name=st-"$cluster_name" --overwrite
done
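To confirm the labels landed on each SyncTarget (a sketch):

kubectl get synctargets --show-labels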
pankajthorat@Pankajs-MacBook-Pro octopus % cat 6-ns-loc-pp.sh
#!/bin/bash
source env_variables.sh
export KUBECONFIG=.kcp/admin.kubeconfig
# create namespaces
for cluster_name in "${CLUSTER_NAMES[@]}"; do
  kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ${cluster_name}
  labels:
    name: ${cluster_name}
EOF
done
# create new locations
for cluster_name in "${CLUSTER_NAMES[@]}"; do
  kubectl apply -f - <<EOF
apiVersion: scheduling.kcp.io/v1alpha1
kind: Location
metadata:
  name: location-$cluster_name
  labels:
    name: location-$cluster_name
spec:
  instanceSelector:
    matchLabels:
      name: st-$cluster_name
  resource:
    group: workload.kcp.io
    resource: synctargets
    version: v1alpha1
EOF
done
# delete the default location
kubectl delete location default
# create placement policies
for cluster_name in "${CLUSTER_NAMES[@]}"; do
  kubectl apply -f - <<EOF
apiVersion: scheduling.kcp.io/v1alpha1
kind: Placement
metadata:
  name: pp-$cluster_name
spec:
  locationResource:
    group: workload.kcp.io
    resource: synctargets
    version: v1alpha1
  locationSelectors:
  - matchLabels:
      name: location-$cluster_name
  namespaceSelector:
    matchLabels:
      name: $cluster_name
  locationWorkspace: root:$WORKSPACE_NAME
  #locationWorkspace: root
EOF
done
#kubectl kcp bind compute root
#kubectl kcp bind compute root:$WORKSPACE_NAME --apiexports=root:$WORKSPACE_NAME:kubernetes
#kubectl delete placements placement-1cgav5jo
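After 6-ns-loc-pp.sh runs, the wiring can be sanity-checked before creating the APIBinding (a sketch; the Placements are expected to become Bound):

kubectl get locations
kubectl get placements
kubectl get ns --show-labels   # each cluster namespace should carry its name=<cluster> label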
pankajthorat@Pankajs-MacBook-Pro octopus % cat 7a-APIBINDING.sh
#!/bin/bash
#kubectl ws .
#kubectl kcp bind compute root:octopus --apiexports=root:octopus:kubernetes
export KUBECONFIG=.kcp/admin.kubeconfig
kubectl ws .
kubectl apply -f - <<EOF
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: bind-kube
spec:
  reference:
    export:
      path: "root:compute"
      name: kubernetes
EOF
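Once the binding is applied, it may help to confirm it actually became ready before re-checking the namespaces (a sketch; Ready is the usual APIBinding condition in kcp v0.11):

kubectl wait apibinding/bind-kube --for=condition=Ready --timeout=60s
kubectl get ns edge-1 -o yaml   # re-check the NamespaceScheduled condition and labels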