LINUX FOUNDATION BUILDBOT CONFIG
================================
This information should help people understand and work with the Linux
Foundation's buildbot setup.
Structure
---------
Our configuration controls builds for two projects, using three
different build slave configurations.
One of the projects is currently defunct, and thus not activated:
Moblin. Should it need to be reactivated for some reason, it should
be tested extensively, as a number of things have changed in buildbot,
in our server setup, and so on. At some point, the Moblin build setup
(both master and slave) will likely be removed.
The other project is active: LSB. For LSB, there are two kinds of
builds: regular project builds, and devchk builds. Devchk is
"special", because it is essentially a comparison between what the LSB
provides and what various Linux distributions provide. Regular builds
need to be done with a minimal setup, but devchk builds need to
include a number of extra native build tools to compare with the LSB.
Both devchk and regular builds are controlled with a single buildbot
master for LSB.
Contents
--------
The current buildbot settings and scripts are stored here. There are
currently three types of files here:
- lsb_master.cfg and moblin_master.cfg, the master configuration for
buildbot
- support Python modules for the above
- master-side support scripts that do things like assemble the
snapshots directory and build repositories
Setup
-----
Setting up the buildbot master is now done via Puppet. Setting up
systems to integrate with Puppet is beyond the scope of this document;
see the Wiki page here for more information:
https://wiki.linuxfoundation.org/en/LSB_Puppet
Master
------
In the LSB's Puppet configuration, add "include buildbot::master" to
the node's definition.
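As a rough illustration (the node name here is made up, and the exact
layout should follow the existing node definitions in the LSB Puppet
configuration), the master's node definition would contain something
like:

    node 'lsb-buildmaster.example.org' {   # hypothetical node name
      include buildbot::master
    }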
Regular slave setup
-------------------
A regular build slave will need about 8G of disk space for just the
build slave data, plus whatever the underlying system needs.
Slave setup is done via Puppet, just like the build master. To add a
slave via Puppet, simply add "include buildbot::slave" to that node's
Puppet configuration. Once Puppet has had a chance to set up the
slave, be sure to trigger "build-sdk" and "libbat" for that slave's
architecture; this will ensure that all the necessary dependencies for
proper builds are present.
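A sketch of the corresponding node definition (again with a made-up
node name) might look like:

    node 'lsb-slave-x86-64.example.org' {   # hypothetical node name
      include buildbot::slave
    }

The "build-sdk" and "libbat" builds can then be triggered from the
master's Web status page or IRC bot, as described under "Using the
configuration" below.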
Build slaves contain no permanent configuration, and so can be deleted
and rebuilt at will. This can be done by stopping the build slave
(with "service buildslave stop") and deleting everything under
/opt/buildbot; Puppet will recreate the entire build slave setup and
restart it.
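For example, a rebuild could be done roughly like this as root on the
slave (whether to run the Puppet agent by hand or just wait for the
next scheduled run depends on how Puppet is set up on our systems):

    service buildslave stop
    rm -rf /opt/buildbot/*
    puppet agent --test   # or wait for the next scheduled Puppet run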
Devchk slave setup
------------------
Devchk slaves are also handled by Puppet, but since they are not tied
to particular architectures, setting them up requires a bit of
configuration on the master as well.
Here is the procedure for setting up a devchk build slave:
- Install a machine (virtual or real) with the preferred distribution
and architecture, and add it to the LSB Puppet setup. A devchk
build slave needs whatever the distro's underlying minimum system
requirements are, plus about 1G disk space for the build slave.
- Choose an identifier for the build slave. It should be unique
among the devchk build slaves, and should give some indication of
what it's running. For example, a Fedora build slave on x86 could
be identified as "fedora-x86".
- Choose a password for the build slave to identify itself to the
master. I recommend using 'pwgen -s 16' to generate a strong
password.
- I also strongly recommend that the password not be stored in the
main puppet configuration in plaintext. We maintain a special
"puppet-lsb-secret" configuration for passwords and the like.
Unfortunately, adding a password here requires someone with admin
access to the master, and may not be possible. If it is, add the
password to the "buildbotpw" module in puppet-lsb-secret as a
variable.
- Add the identifier to the file named "devchk_build_slave_list" in
this project.
- In the Puppet config, in "modules/buildbot/manifests/devchk.pp",
add the identifier and password to the definitions for the
"masteruser" and "masterpw" variables. These are tied to the
distribution name, version, and architecture, separated by dashes.
They must match exactly; you can get the identifiers to match by
running "facter" on your new build slave. For the password, enter
the variable from buildbotpw if you set that, or enter the bare
text password if you could not. Note that "masteruser" expects the
identifier to be prefixed with "devchk-". (A worked example follows
this list.)
- Also in the Puppet config, add the username and password to the
"/opt/buildbot/slave_pwds" definition in
modules/buildbot/manifests/master.pp. Put the new build slave
user/password on its own line, separated by a colon. Use the
buildbotpw variable for the password you defined above if
possible. The most important thing is that the user and password
in the master's slave_pwds should match the user and password
defined above for "masteruser" and "masterpw".
- Finally, add "include buildbot::devchk" to the per-node Puppet
configuration for the build slave.
- Commit the changes, and restart the LSB build master.
- Once the new build slave is completely configured by Puppet and
online, trigger the "build-sdk" job for that build slave, to ensure
that the slave will have an SDK to test against.
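To tie the pieces together, here is a hypothetical summary for a
Fedora build slave on x86 (the distribution string, the buildbotpw
variable name, and the password are made up; check "facter" output and
the existing entries in devchk.pp for the exact format):

    Identifier:        fedora-x86
    Facter string:     Fedora-14-i386   (name-version-arch from facter)
    masteruser entry:  Fedora-14-i386 -> devchk-fedora-x86
    masterpw entry:    Fedora-14-i386 -> $buildbotpw::fedora_x86
                       (or the bare password, if the secret config
                       could not be updated)
    slave_pwds line:   devchk-fedora-x86:s3kr3tpassword
    Node config:       include buildbot::devchk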
Using the configuration
-----------------------
Both masters include an IRC bot and a Web status page that can be used
to trigger builds. See the buildbot documentation for details.
On the LSB side, there is a separate build for each architecture; they
are named for their version control repository and architecture,
separated by a dash. So, for example, the ia64 build of our Python
tests is called "python-test-ia64". For stuff built out of
"packaging", the subdir name from "packaging" is used. The SDK is
built as a single build, called "build-sdk" (plus the arch), and the
appbat is split into two builds, called "libbat" and "appbat".
The LSB side is also special because we have different types of
builds: "production", "development", and "beta". Depending on the
type of build, different versions of the SDK are used to build, and
package versions may get embedded dates.
By default, all builds are "production" builds unless done from the
devel tree, in which case they default to "development". The build
type can be forced by supplying a reason for the build which starts
with either "production:" or "beta:". (It's assumed that development
builds of released products aren't needed.) This ultimately
translates into a "build_type" property on the build, which drives the
other processes in the build.
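For example, forcing a build from the Web status page or the IRC bot
with a reason like the following would result in a beta build (the
wording after the "beta:" prefix is just an illustration):

    beta: rebuild ahead of the next beta release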
MultiScheduler
--------------
Because the LSB builder has so many builds, all related in certain
ways, we provide a system for starting multiple builds at once with a
single action. Currently, this takes the form of a spooler directory,
which is watched for spool files describing a build request.
The easiest way to take advantage of the MultiScheduler is with the
provided command-line utility, "start_lsb_build". This utility can
create properly-formatted job files using simple commands. To learn
how to use start_lsb_build, run it with the -h parameter, which will
print a short help message. The utility can auto-submit the job (and
does so by default), or it can be used to create job files that can be
submitted later by copying them to the spool directory.
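For example, to print the help message:

    start_lsb_build -h

(Depending on the local setup, the utility may need to be run with an
explicit path to this project's checkout.)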
There are some cases where direct manipulation of spool files is
necessary; for example, when submitting jobs automatically in slightly
different ways depending on the context. The files consist of
multiple lines of property=value pairs. Properties recognized:
branch_name - Name of the bzr branch directory containing the projects
in question. For example, if a project to build is found at
"http://bzr.linuxfoundation.org/lsb/devel/build_env", the branch_name
for this build would be "devel".
projects - Comma-separated list of projects to build. The project
name consists of the builder name in buildbot minus the trailing
architecture identifier. For example, if one of the buildbot builders
to be triggered is "azov-qt3-tests-x86_64", the project would be
"azov-qt3-tests".
architectures - Comma-separated list of architectures to build for.
Using the previous example for "projects", the architecture would be
"x86_64".
devchk_builders - Comma-separated list of devchk builder IDs to build
for. So, for example, to limit a devchk build to run only on the
builder identified as "fedora-x86", this would be "fedora-x86".
tag - the bzr tag name as passed to the "-r" parameter of bzr, minus
the "tag:" specifier.
build_type - the type of build being done. This must be "normal",
"production", or "beta".
A minimal request file must contain the "projects" and "branch_name"
properties. The rest of the properties default to a normal build, for
all architectures.
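As an illustration, a spool file requesting beta builds of the Python
tests and the azov-qt3 tests from the devel branch, on two
architectures, could look like this (the property names and example
values are taken from the descriptions above; for the exact formatting
details, follow existing spool files or the files generated by
start_lsb_build):

    branch_name=devel
    projects=python-test,azov-qt3-tests
    architectures=x86_64,ia64
    build_type=beta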
This utility can trigger devchk builds as well as regular builds. To
do this, set "projects" to "devchk". You can limit which builds get
built with the "devchk_builders" property. Devchk builds are weird in
two ways:
- You cannot specify both devchk and regular builds in the same job
file, because they are tied to completely different things
(architectures vs. devchk build slaves).
- There is no way to trigger the SDK build by itself for devchk build
slaves; the "devchk" project triggers both the SDK and devchk
builds.
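A hypothetical devchk request limited to the "fedora-x86" builder
would therefore look something like:

    branch_name=devel
    projects=devchk
    devchk_builders=fedora-x86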