Modify triggers and remove dockerdeploy in github workflow file #69
base: develop
Conversation
I disagree on the Docker Hub pushes. Whenever I use the Docker image, I want the most recent version with the most up-to-date specs. I don't want to have to pull the latest specs from GitHub separately; if I'm doing that, I may as well just build the image locally and not bother with the Docker Hub version. Having manual deploys means there is no guarantee that the version on Docker Hub corresponds to the code at that version in the repo. This opens the door to accidental or deliberate sabotage of the image, because it's no longer tied to the code in the repository. It's also annoying to have to ask someone with the appropriate permissions to do the push to Docker Hub, rather than having it happen automatically when the specs or API change.
```yaml
  docker_build_and_push:
    runs-on: ubuntu-latest
    needs: run_tests
    if: (github.ref == 'refs/heads/develop' || github.ref == 'refs/heads/master') && github.event_name == 'push' && !contains(github.event.head_commit.message, 'skip_docker_build')
```
This line skips the docker build/push if the commit message contains 'skip_docker_build'. I think it would be more sensible to add documentation about this and have the builds pushed to Docker Hub by default.
Alternatively, set the image name in `scripts/local-build.sh` and run it to build and deploy locally, which may be a lot faster.
This comment is still valid -- why are you deleting it?
It's no longer called `local-build.sh`, and I don't know whether or not it is faster compared to GitHub Actions. It was definitely faster compared to the Docker Hub build hook, but we no longer use that.
Also, we no longer set the image name within the script.
I don't think there is anything to be gained and lots to be lost by going to a manual update process.
As the person who does most of the updating to Docker Hub and Rancher, I have not yet found the GitHub Action useful and prefer running the script, which is a similar workflow to publishing a package in other systems. At a later point, if we find that we want it in the CI, we can add this workflow back in (which is why I kept the code in comments). I would prefer us not having to remember to add an extra keyword to the commit message for the majority of PRs. It is also less code to maintain, as that action does have some complexity.
Rather than commenting out the workflow, a better approach would be to change the logic so that you have to include the keyword if you want the build automatically deployed to Docker Hub.

Question: are you creating versioned Docker images corresponding to each API version? If you're just interested in the API, I think it makes sense to have tagged versions released whenever the API is updated. However, if you also want the latest specs, it is more useful to have the bleeding-edge Docker image. In my use case, I want the whole bundle (specifically, I want to use docker-compose to create an integrated setup of Arango + API with the latest specs). I think we need to add some sort of versioning info to the specs (we have discussed it enough times!), so that both API and specs can be identified as a specific version.
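As a sketch of the opt-in approach suggested above (the keyword name `docker_deploy` is an assumption, not an agreed convention), the existing `if:` condition could be inverted so the deploy only runs when the commit message explicitly asks for it:

```yaml
# Sketch: deploy to Docker Hub only when the commit message opts in.
# The keyword 'docker_deploy' is a placeholder name.
if: >-
  (github.ref == 'refs/heads/develop' || github.ref == 'refs/heads/master') &&
  github.event_name == 'push' &&
  contains(github.event.head_commit.message, 'docker_deploy')
```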
In the actual environment deploys, we've been keeping separate semvers for the API vs the specs. In this case it's useful to have a separate version for the API, because it determines whether we need to do a service upgrade in Rancher (we don't need an upgrade on spec updates). It's a requirement of Shane's to not have to do a service upgrade in order to do a spec upgrade. Another perspective is that the specs are really part of the interface of the API as far as the user is concerned, so they should share one semver. This is a pretty unusual system we have.

For your use case, one answer would be to include both the API and spec versions in each image tag, such as "0.0.5-0.0.19". The Rancher deploys could pay attention only to the first semver, which is the API (all specs are pulled from the release). Tests could lock to either version.

As for the workflow, a compromise that would actually work well for me is to only enable the Docker Hub deploy workflow when pushing to master (which might be better called "deploy"). That way we keep some of the automatic-ness, and I get a more controllable deploy process.
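To illustrate the combined-tag idea, here is a minimal shell sketch (variable names are hypothetical) showing how deploy tooling could split an "API-spec" tag like `0.0.5-0.0.19` so that Rancher only acts on the API semver:

```shell
# Split a combined "<api>-<spec>" image tag into its two semvers.
TAG="0.0.5-0.0.19"          # example combined tag from the comment above
API_VERSION="${TAG%-*}"     # everything before the last hyphen -> API semver
SPEC_VERSION="${TAG##*-}"   # everything after the last hyphen -> spec semver
echo "$API_VERSION $SPEC_VERSION"   # prints: 0.0.5 0.0.19
```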
On further thought, I think it would make most sense for the docker image to only be versioned for the API itself and not be responsible for caching specs. Tests that use the API image can set SPEC_RELEASE_URL. Other codebases that pull in the specs can either use the release or repo URLs. |
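Under that scheme, a docker-compose setup like the one described earlier might look roughly as follows. This is a hypothetical sketch: the image names, version tags, and release URL are placeholders, not the project's actual values; only the `SPEC_RELEASE_URL` variable name comes from the discussion above.

```yaml
# Hypothetical docker-compose sketch: the API image is versioned for the
# API only and fetches specs at startup from SPEC_RELEASE_URL.
services:
  arangodb:
    image: arangodb:latest          # placeholder image/tag
  api:
    image: example/relation-api:0.0.5   # placeholder image, API semver only
    environment:
      # Placeholder URL pointing at a tagged spec release
      SPEC_RELEASE_URL: https://example.com/specs/releases/0.0.19.tar.gz
    depends_on:
      - arangodb
```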
This sounds good (including the branch renaming). Related Q: is there currently any automation for creating spec releases and what's the criteria for making a new release? Is it on every new commit to develop? |
The criteria for a spec release would be any modification to the spec files. Using a git tag might be confusing, as the semver of the spec will be different from the API's.
Most of our PRs are not updates to the API, so I don't think we need to automatically run this docker deploy code on every push to `develop`. I like to explicitly run the `scripts/docker_deploy` command to actually update the image on Docker Hub when needed. I often find the control this provides to be very useful.

Also, if we run on the `pull_request_target` event, then people will be able to use our secret tokens from their forks. We were also running the workflow twice on any PR, as it was triggering on both push and PR. So I made it trigger on push only for develop and master, same as what we're doing in index_runner.
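The trigger change described above would look roughly like this in the workflow file (a sketch of the described behavior, not the exact committed YAML):

```yaml
# Run the workflow on pushes to develop and master only, avoiding the
# duplicate push + pull_request runs and the pull_request_target
# secret-exposure issue mentioned above.
on:
  push:
    branches:
      - develop
      - master
```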