Docker plain helm chart #1035


Open · wants to merge 45 commits into base: master

Conversation

segfault16

Add helm chart for docker plain

Refers to #1013

Tasks:

  • Updated documentation in docs/modules/... directory
  • Ran tests in <quickstarter>/testdata directory
  • Shared library adaptations

@faust2199
Contributor

Having a very strict values.schema.json is probably not going to provide a good experience when using helmValues defined from the Jenkinsfile. According to https://helm.sh/docs/topics/charts/ , "the schema is applied to the final .Values object, and not just to the values.yaml file."

Let's say we have the following in the Jenkinsfile:

```groovy
odsComponentStageRolloutOpenShiftDeployment(context, [
  'selector': "app.kubernetes.io/name=${context.componentId}",
  'helmEnvBasedValuesFiles': ['values.env.yaml'],
  'helmValues': [ 'foo': 'bar' ]
])
```

When the above runs on Jenkins, ods-jenkins-shared-library will add two arguments on the command line:

```
--set foo=bar --set global.foo=bar
```

Then the rollout will be aborted, because additional properties are not allowed in either place. Is the user really expected to add two entries to values.schema.json to fix this?

I suggest that we remove the whole values.schema.json file. If the result of rendering is bad, helm upgrade is going to fail and rollback anyway.
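To make the failure mode concrete, here is a minimal, self-contained sketch (not Helm's actual implementation, and independent of any real chart) of how `"additionalProperties": false` rejects the merged values: after the shared library's two `--set` flags, `foo` appears both at the top level and under `global`, and a strict schema flags both.

```python
# Sketch only: simulates the relevant part of JSON Schema validation.
# Helm validates the final merged .Values object, not just values.yaml.

def validate(values, schema):
    """Collect errors for keys not declared under 'properties' when
    additionalProperties is false, recursing into nested objects."""
    errors = []
    props = schema.get("properties", {})
    if schema.get("additionalProperties") is False:
        for key in values:
            if key not in props:
                errors.append(f"additional property '{key}' not allowed")
    for key, subschema in props.items():
        if key in values and isinstance(values[key], dict):
            errors.extend(validate(values[key], subschema))
    return errors

# Hypothetical strict schema in the spirit of the chart's values.schema.json:
schema = {
    "additionalProperties": False,
    "properties": {
        "replicaCount": {},
        "global": {"additionalProperties": False, "properties": {}},
    },
}

# Merged values after '--set foo=bar --set global.foo=bar':
merged = {"replicaCount": 1, "foo": "bar", "global": {"foo": "bar"}}
print(validate(merged, schema))
# Both occurrences of 'foo' are reported, so the deployment would abort.
```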

@renedupont
Member

> Having a very strict values.schema.json is probably not going to provide a good experience when using helmValues defined from the Jenkinsfile. According to https://helm.sh/docs/topics/charts/ , "the schema is applied to the final .Values object, and not just to the values.yaml file."
>
> Let's say we have the following in the Jenkinsfile: `odsComponentStageRolloutOpenShiftDeployment(context, [ 'selector': "app.kubernetes.io/name=${context.componentId}", 'helmEnvBasedValuesFiles': ['values.env.yaml'], 'helmValues': [ 'foo': 'bar' ] ])`
>
> When the above runs on Jenkins, ods-jenkins-shared-library will add two arguments on the command line: `--set foo=bar --set global.foo=bar`
>
> Then it will be aborted because additional properties are not allowed in both places. Is the user really expected to add two entries in values.schema.json to fix this?
>
> I suggest that we remove the whole values.schema.json file. If the result of rendering is bad, helm upgrade is going to fail and rollback anyway.

I partially agree, but I think there is a middle way. In values.schema.json we could change "additionalProperties" from false to true in the relevant parts. That way we still validate everything that is set in that file, while anything else is simply not considered and does not fail the Helm validation.
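Concretely, this middle way might look like the following hypothetical values.schema.json fragment (the property names are illustrative, not the chart's actual keys): declared keys are still type-checked, while extra keys injected via `--set` pass through.

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "additionalProperties": true,
  "properties": {
    "replicaCount": { "type": "integer", "minimum": 0 },
    "image": {
      "type": "object",
      "additionalProperties": true,
      "properties": {
        "repository": { "type": "string" }
      }
    }
  }
}
```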

serverhorror previously approved these changes Feb 7, 2025

@jafarre-bi (Contributor) left a comment

A few new comments and some conversations still open.

```diff
@@ -20,7 +20,10 @@ odsComponentPipeline(
   */
   odsComponentStageBuildOpenShiftImage(context)
 }
-odsComponentStageRolloutOpenShiftDeployment(context)
+odsComponentStageRolloutOpenShiftDeployment(context, [
+  'selector': "app.kubernetes.io/name=${context.componentId}",
```
Contributor
Why is this selector only the name? chart.selectorLabels are, as they should be, name + instance.

Contributor
> Why is this selector only the name? chart.selectorLabels are, as they should be, name + instance.

Do you mean https://kubernetes.io/docs/reference/labels-annotations-taints/#app-kubernetes-io-instance? How do we generate the unique postfix like in the example given "mysql-abcxyz"?

@jafarre-bi (Contributor), Feb 12, 2025

@faust2199 Unless there's something I'm missing, the deployment.yaml uses chart.selectorLabels for the selector. This is defined in _helpers.tpl as name = chart.name and instance = .Release.Name. Whatever we use, it must match the actual selectors used in deployments, services, etc.
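For readers without the chart open: the helper referred to here is the standard `helm create` boilerplate. A sketch of the likely _helpers.tpl shape (the `chart.*` template names are placeholders, not necessarily this chart's exact include names):

```yaml
{{/* Selector labels, as generated by `helm create` */}}
{{- define "chart.selectorLabels" -}}
app.kubernetes.io/name: {{ include "chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
```

With this definition, a selector of only `app.kubernetes.io/name=...` would be broader than the labels actually applied to the pods.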

Contributor
My questions here are:

  1. Is the chart name always equal to the componentId?
  2. Will the functionality of getting resources based only on the name label never return resources that shouldn't match?
  3. Is it correct to specify this selector here when the actual selector used in the deployment is different? (For this I would answer no.)

```yaml
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 3
```
Contributor
The Release Manager doesn't support multiple replicas yet. Not sure if these values should even be configurable, or at least maxReplicas should be 1 by default, with a comment saying that higher values aren't supported.
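The suggestion above could look like this hypothetical values.yaml fragment (a sketch, not the chart's final content):

```yaml
autoscaling:
  enabled: false
  minReplicas: 1
  # Values > 1 are not yet supported by the Release Manager /
  # orchestration pipeline; raise only once that limitation is lifted.
  maxReplicas: 1
```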

segfault16 (Author)
@jafarre-bi I don't know about this limitation; there's nothing about it in the OpenDevStack documentation. Please provide the source.

@jafarre-bi (Contributor), Feb 12, 2025
@segfault16 HelmDeploymentStrategy, line 198:

```groovy
// TODO: Once the orchestration pipeline can deal with multiple replicas,
// update this to store multiple pod artifacts.
```
Even if the code could deal with it, and documentation and traceability were correctly preserved, it would still not be officially supported. We would need to validate the functionality, document it, and create specific automated tests to prevent regressions.

Labels: enhancement (New feature or request)

7 participants