This repository has been archived by the owner on Feb 29, 2024. It is now read-only.

Come up with an automated end-to-end BonnyCI test #178

Open
gandelman-a opened this issue Mar 16, 2017 · 0 comments

gandelman-a commented Mar 16, 2017

When validating that the BonnyCI system is functional end-to-end, from a user's point of view, I often find myself going through the following steps (a rough automation sketch follows the list):

  • Creating a fork of a BonnyCI-managed sandbox
  • Committing a change to that fork
  • Proposing that change via a PR from the fork to the BonnyCI-managed sandbox
  • Ensuring check jobs run, pass/fail as expected, and publish logs to the correct place
  • Getting someone to approve the change
  • Ensuring gate jobs run, pass/fail as expected, and publish logs to the correct place
  • Ensuring the patch lands in the upstream BonnyCI-managed sandbox repo
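
Here is a minimal sketch of how that flow could be automated with PyGithub, assuming BonnyCI reports job results as GitHub commit statuses; the repo names, branch naming, approval step, and timeouts are placeholders, not the actual sandbox layout:

```python
# A minimal sketch of the flow above using PyGithub, assuming BonnyCI
# reports job results as GitHub commit statuses. Repo names, branch
# naming, and timeouts below are placeholders, not the real sandbox.
import time

from github import Github

UPSTREAM = "BonnyCI/sandbox"   # hypothetical BonnyCI-managed sandbox
TOKEN = "<token for a pre-provisioned test user>"

gh = Github(TOKEN)
upstream = gh.get_repo(UPSTREAM)

# 1. Fork the sandbox (a no-op if the test user's fork already exists).
fork = upstream.create_fork()

# 2. Commit a trivial change to a new branch on the fork.
base_sha = fork.get_branch(upstream.default_branch).commit.sha
branch = "e2e-%d" % int(time.time())
fork.create_git_ref(ref="refs/heads/%s" % branch, sha=base_sha)
fork.create_file(
    path="e2e/%s.txt" % branch,
    message="End-to-end test change",
    content="generated by the e2e test suite",
    branch=branch,
)

# 3. Propose the change as a PR from the fork to the upstream sandbox.
pr = upstream.create_pull(
    title="e2e: %s" % branch,
    body="Automated end-to-end test change",
    base=upstream.default_branch,
    head="%s:%s" % (fork.owner.login, branch),
)

# 4. Wait for check jobs to report a combined status on the head commit.
def wait_for_status(timeout):
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = upstream.get_commit(pr.head.sha).get_combined_status()
        if status.state in ("success", "failure", "error"):
            return status
        time.sleep(30)
    raise AssertionError("check jobs never reported a status")

check = wait_for_status(timeout=600)
assert check.state == "success", "check jobs did not pass"
# Log publication could be verified here via check.statuses[*].target_url.

# 5. Approval would happen here (e.g. a second credentialed test user
#    leaving an approving review), triggering the gate jobs.

# 6. Finally, poll until the change lands in the upstream sandbox repo.
deadline = time.time() + 1200
while time.time() < deadline and not upstream.get_pull(pr.number).merged:
    time.sleep(30)
assert upstream.get_pull(pr.number).merged, "change never landed upstream"
```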

This should be distilled into an automated test case that can exercise various scenarios of the above. There are some problems to figure out around how we manage GitHub credentials, sandbox setup, etc., but for the first iteration it might be OK to just lean on a test config that contains info for pre-existing GitHub users and repositories.
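
For that first iteration, the test config might look something like this (key names and values are purely illustrative):

```yaml
# Hypothetical first-iteration test config; all names are placeholders.
github:
  api_token: "<token for the pre-provisioned test user>"
  test_user: bonnyci-e2e-bot
  approver_token: "<token for a second user allowed to approve changes>"
sandbox:
  upstream_repo: BonnyCI/sandbox
  fork_repo: bonnyci-e2e-bot/sandbox
logs:
  base_url: https://logs.example.org/
timeouts:
  check_seconds: 600
  gate_seconds: 1200
```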

The long-term goal is to have a small test suite that is portable between BonnyCI environments. In theory we should be able to point it at any repos, let it run, and validate that the BonnyCI environment managing those repos is functional. This can be used in CI and QA to validate test environments, in operations to validate that a new production environment does what it's supposed to, and in monitoring to periodically test the health of a BonnyCI environment. Note that this is not a test of hoist itself, but of the running system that hoist produces.

@gandelman-a gandelman-a changed the title from "Come up with an automated end-to-end hoist test" to "Come up with an automated end-to-end BonnyCI test" Mar 16, 2017
@gandelman-a gandelman-a self-assigned this Mar 30, 2017
gandelman-a added a commit to gandelman-a/hoist that referenced this issue May 4, 2017
Until we can run this as a periodic job in Zuul and get its logs
published into logstash, run it as an ansible task from the bastion.

This essentially just sets up the task and a user, and passes through some
secrets. The test suite itself contains a playbook, which the ansible-runner
task calls, to convert the secrets into a test config and run the test suite.

The datadog monitor should be able to monitor for the runner task and
report on its failure.

Closes-Issue: BonnyCI/projman#178

Signed-off-by: Adam Gandelman <adamg@ubuntu.com>
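
For reference, a rough sketch of the bastion-side wiring described in this commit message, expressed as Ansible tasks; the user name, paths, and schedule are illustrative, not the actual hoist layout:

```yaml
# Hypothetical tasks: render secrets into the test config the suite
# expects, then run the suite's playbook periodically from the bastion.
- name: Render end-to-end test config from secrets
  template:
    src: e2e_test_config.yaml.j2
    dest: /etc/bonnyci/e2e_test_config.yaml
    owner: bonnyci-e2e
    mode: '0600'

- name: Run the end-to-end test suite periodically
  cron:
    name: bonnyci-e2e
    user: bonnyci-e2e
    minute: '0'
    job: >-
      ansible-playbook /opt/bonnyci-e2e/playbooks/run-e2e.yaml
      >> /var/log/bonnyci-e2e.log 2>&1
```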
gandelman-a added a commit to gandelman-a/hoist that referenced this issue May 4, 2017
gandelman-a added a commit to gandelman-a/hoist that referenced this issue May 4, 2017
gandelman-a added a commit to gandelman-a/hoist that referenced this issue May 5, 2017
gandelman-a added a commit to gandelman-a/hoist that referenced this issue May 8, 2017
gandelman-a added a commit to gandelman-a/hoist that referenced this issue May 8, 2017
gandelman-a added a commit to gandelman-a/hoist that referenced this issue May 8, 2017