
kubemacpool-mac-controller-manager going into CrashLoopBackOff when creating a new HCO deployment #313

Closed
ezio-auditore opened this issue Jun 11, 2021 · 3 comments


@ezio-auditore

What happened:
I was trying out the latest HCO build, kubevirt/hyperconverged-cluster-index:1.5.0-unstable, but something seems to be wrong with kubemacpool-mac-controller-manager. The pod doesn't come up and stays in CrashLoopBackOff.
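For reference, this is roughly how I observed the crash loop and captured the logs; the kubevirt-hyperconverged namespace and the app=kubemacpool label below assume a default HCO install and may need adjusting:

```bash
# Watch the kubemacpool pods cycle through CrashLoopBackOff.
# (Namespace and label are assumptions based on a default HCO install.)
kubectl get pods -n kubevirt-hyperconverged -l app=kubemacpool -w

# Capture logs from the current and the previously crashed container.
kubectl logs -n kubevirt-hyperconverged deploy/kubemacpool-mac-controller-manager
kubectl logs -n kubevirt-hyperconverged deploy/kubemacpool-mac-controller-manager --previous
```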
What you expected to happen:
kubemacpool-mac-controller-manager should have come up, and the KubeVirt HCO should be in the Succeeded state.
How to reproduce it (as minimally and precisely as possible):
Install OpenShift 4.8.0-fc.0 on 3 bare-metal nodes.
Follow these instructions to install the unreleased 1.5.0-unstable build: https://github.com/kubevirt/hyperconverged-cluster-operator#installing-unreleased-bundle-using-a-custom-catalog-source
Create the following (a rough sketch of these objects follows this list):
CatalogSource
Namespace
OperatorGroup
Subscription
HCO cr: https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/hco.cr.yaml
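For reference, a rough sketch of those objects as a single apply. The resource names, package name, and channel are assumptions patterned on the linked instructions, not the exact manifests used; only the index image tag comes from the report above:

```bash
# Sketch of the objects from the linked instructions; names, package, and
# channel are assumptions and may differ from the official manifests.
cat <<'EOF' | kubectl apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: hco-unstable-catalog-source
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/kubevirt/hyperconverged-cluster-index:1.5.0-unstable
  displayName: KubeVirt HyperConverged (unstable)
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubevirt-hyperconverged
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: kubevirt-hyperconverged
spec:
  targetNamespaces:
    - kubevirt-hyperconverged
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: kubevirt-hyperconverged
spec:
  source: hco-unstable-catalog-source
  sourceNamespace: openshift-marketplace
  name: community-kubevirt-hyperconverged   # package name: assumption
  channel: "candidate-v1.5"                 # channel: assumption
EOF

# Then create the HCO CR itself:
kubectl apply -f https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/hco.cr.yaml
```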
Anything else we need to know?:
Attached are the following logs from kubemacpool-cert-manager and kubemacpool-mac-controller-manager:

kubemacpool-cert-manager-6557fb8648-wj5cc-manager.log
kubemacpool-mac-controller-manager-7df76694f-tg9bb-manager.log

Environment:
Kubernetes version (use kubectl version):
Client Version: 4.7.7
Server Version: 4.8.0-fc.0
Kubernetes Version: v1.21.0-rc.0+fde4aa9

Hardware configuration:
3 bare-metal nodes
Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
16/16 cores, 32 threads; 256 GB memory
OS (e.g. from /etc/os-release):
cat /etc/os-release
NAME="Red Hat Enterprise Linux CoreOS"
VERSION="48.84.202104151145-0"
VERSION_ID="4.8"
OPENSHIFT_VERSION="4.8"
RHEL_VERSION="8.4"
PRETTY_NAME="Red Hat Enterprise Linux CoreOS 48.84.202104151145-0 (Ootpa)"
ID="rhcos"
ID_LIKE="rhel fedora"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::coreos"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="OpenShift Container Platform"
REDHAT_BUGZILLA_PRODUCT_VERSION="4.8"
REDHAT_SUPPORT_PRODUCT="OpenShift Container Platform"
REDHAT_SUPPORT_PRODUCT_VERSION="4.8"
OSTREE_VERSION='48.84.202104151145-0'
Kernel (e.g. uname -a):
Linux ********* 4.18.0-293.el8.x86_64 #1 SMP Mon Mar 1 10:04:09 EST 2021 x86_64 x86_64 x86_64 GNU/Linux

@RamLavi (Member) commented Jun 13, 2021

Hey @ezio-auditore, does this issue occur every time you deploy HCO? Did you try more than once?
This information will help in profiling the problem, so I can try to recreate it.
Also, can you add more detail about the bare-metal machines you are running this on? It may be helpful.

@pradeep-av

@RamLavi Hitting this issue on a cluster-network-addons-operator deployment too. kubemacpool-mac-controller-manager is failing with the same error ("requested resource not found"):

{"level":"error","ts":1625732361.5342925,"logger":"runKubemacpoolManager","msg":"Failed to run the kubemacpool manager","error":"failed to start pool manager routines: failed Init pool manager maps: failed to init MacPoolMap From Cluster: failed to iterate the cluster vm interfaces to recreate the macPoolMap: failed iterating over all cluster vms: the server could not find the requested resource","errorVerbose":"the server could not find the requested resource\nfailed iterating over all cluster vms\ngithub.com/k8snetworkplumbingwg/kubemacpool/pkg/pool-manager......

@pradeep-av

@RamLavi Found the issue with my setup: I had an older KubeVirt version. With the new one it worked; I had missed the 0.37.1 minimum version requirement.
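For anyone else hitting this, one way to confirm the running KubeVirt version before upgrading is to read it from the KubeVirt CR's status; a sketch assuming a default install where a single KubeVirt CR exists:

```bash
# Print the version the KubeVirt operator reports in its CR status;
# per the fix above, it should be >= 0.37.1 for kubemacpool to work.
kubectl get kubevirt -A -o jsonpath='{.items[0].status.observedKubeVirtVersion}{"\n"}'
```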

RamLavi closed this as completed Nov 15, 2021