# cp_install
There are currently two scenarios where OneCP is used; please take a look at the kustomize templates. An example filled-in template may look like the snippet below:
```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: cloud-config
  namespace: kube-system
stringData:
  config.yaml: |
    opennebula:
      endpoint:
        ONE_XMLRPC: "http://10.2.11.40:2633/RPC2"
        ONE_AUTH: "oneadmin:password"
      publicNetwork:
        name: public
      privateNetwork:
        name: private
      virtualRouter:
        templateName: capone131-vr
        extraContext: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opennebula-cloud-controller-manager
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:opennebula-cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: opennebula-cloud-controller-manager
    namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: cloud-controller-manager
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: cloud-controller-manager
    spec:
      serviceAccountName: opennebula-cloud-controller-manager
      containers:
        - name: cloud-controller-manager
          image: "ghcr.io/opennebula/cloud-provider-opennebula:v0.0.1"
          command:
            - /opennebula-cloud-controller-manager
            - --cloud-provider=opennebula
            - --cluster-name=test
            - --cloud-config=/etc/one/config.yaml
            - --leader-elect=true
            - --use-service-account-credentials
            - --controllers=cloud-node,cloud-node-lifecycle,service-lb-controller
          volumeMounts:
            - name: cloud-config
              mountPath: /etc/one/
              readOnly: true
      volumes:
        - name: cloud-config
          secret:
            secretName: cloud-config
      hostNetwork: true
      tolerations:
        - key: node.cloudprovider.kubernetes.io/uninitialized
          value: "true"
          effect: NoSchedule
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
```
Where the important settings are:

- `endpoint/ONE_XMLRPC: "http://10.2.11.40:2633/RPC2"`: the OpenNebula cluster endpoint
- `endpoint/ONE_AUTH: "oneadmin:password"`: the OpenNebula admin credentials
- `publicNetwork/name: public`: a "public" OpenNebula VNET to attach the primary VR NIC (eth0)
- `privateNetwork/name: private`: a "private" OpenNebula VNET to attach the secondary VR NIC (eth1)
- `virtualRouter/templateName: capone131-vr`: the VR instance template to use with OneCP
- `virtualRouter/extraContext: {}`: can be used to pass extra `CONTEXT=[]` variables during VR creation
- `image: "ghcr.io/opennebula/cloud-provider-opennebula:v0.0.1"`: the public OneCP Docker image to deploy
- `--cluster-name=test`: the "name" of the Kubernetes cluster (this setting is more important in OneCAPI)
A typical VR/VM template looks like:
```
CONTEXT = [
  NETWORK = "YES",
  ONEAPP_VNF_DNS_ENABLED = "YES",
  ONEAPP_VNF_DNS_NAMESERVERS = "1.1.1.1 8.8.8.8",
  ONEAPP_VNF_DNS_USE_ROOTSERVERS = "NO",
  ONEAPP_VNF_NAT4_ENABLED = "YES",
  ONEAPP_VNF_NAT4_INTERFACES_OUT = "eth0",
  ONEAPP_VNF_ROUTER4_ENABLED = "YES",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]",
  TOKEN = "YES" ]
CPU = "1"
DISK = [
  IMAGE = "Service Virtual Router" ]
GRAPHICS = [
  LISTEN = "0.0.0.0",
  TYPE = "VNC" ]
HOT_RESIZE = [
  CPU_HOT_ADD_ENABLED = "NO",
  MEMORY_HOT_ADD_ENABLED = "NO" ]
LXD_SECURITY_PRIVILEGED = "true"
MEMORY = "512"
MEMORY_RESIZE_MODE = "BALLOONING"
MEMORY_UNIT_COST = "MB"
NIC_DEFAULT = [
  MODEL = "virtio" ]
OS = [
  ARCH = "x86_64",
  FIRMWARE = "",
  FIRMWARE_SECURE = "YES" ]
VROUTER = "YES"
```
Where the `Service Virtual Router` image is just a stock VR QCOW2 disk that can be downloaded from the OpenNebula Marketplace.
> [!NOTE]
> The VR instances deployed by OneCP do not require OneGate access, i.e. the HAProxy instances are configured with purely static `CONTEXT=[]` parameters.
Because VR instances do not currently support NIC alias attachments (multiple floating IPs), OneCP emulates this behavior with an additional Address Range (AR) of type `ETHER`. Please make sure a single `ETHER` (MAC-only) AR is added to your "public" VNET at index 1, for example:
"AR": [
{
"AR_ID": "0",
"IP": "10.2.11.200",
"MAC": "02:00:0a:02:0b:c8",
"SIZE": "48",
"TYPE": "IP4",
"MAC_END": "02:00:0a:02:0b:f7",
"IP_END": "10.2.11.247",
"USED_LEASES": "0",
"LEASES": {}
},
{
"AR_ID": "1",
"MAC": "02:00:3c:f0:4d:f9",
"SIZE": "16",
"TYPE": "ETHER",
"MAC_END": "02:00:3c:f0:4e:08",
"USED_LEASES": "0",
"LEASES": {}
}
]
You can also specify the exact `AR_ID`, for example:

```yaml
publicNetwork:
  name: public
  addressRangeID: 3
```
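Such an `ETHER` AR can be appended to the VNET with an AR template file; the snippet below is a sketch (the `SIZE` value and filename are illustrative, so verify the exact flags against `onevnet addar --help` on your version):

```
AR = [
  TYPE = "ETHER",
  SIZE = "16" ]
```

which could then be applied with, e.g., `onevnet addar public ar_ether.txt`.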
Download and install `kustomize` and `envsubst`. Create an environment file `./.env` with contents similar to:

```shell
CLUSTER_NAME=test
CCM_IMG=ghcr.io/opennebula/cloud-provider-opennebula:v0.0.1
ONE_XMLRPC=http://10.2.11.40:2633/RPC2
ONE_AUTH=oneadmin:password
ROUTER_TEMPLATE_NAME=capone131-vr
PUBLIC_NETWORK_NAME=service
PRIVATE_NETWORK_NAME=private
```
```shell
kustomize build kustomize/default/ | (export `cat .env` && envsubst) | kubectl apply -f-
kustomize build kustomize/oneke/ | (export `cat .env` && envsubst) | kubectl apply -f-
```