Merge lp://staging/~raharper/juju-deployer/populate-first into lp://staging/juju-deployer
Status: Rejected
Rejected by: Haw Loeung
Proposed branch: lp://staging/~raharper/juju-deployer/populate-first
Merge into: lp://staging/juju-deployer
Diff against target: 428 lines (+214/-26) (has conflicts), 11 files modified
- Makefile (+5/-1)
- deployer/action/importer.py (+109/-18)
- deployer/cli.py (+7/-0)
- deployer/deployment.py (+32/-3)
- deployer/env/go.py (+27/-0)
- deployer/env/py.py (+6/-0)
- deployer/service.py (+14/-1)
- deployer/tests/test_charm.py (+5/-0)
- deployer/tests/test_deployment.py (+0/-2)
- deployer/tests/test_guiserver.py (+8/-1)
- deployer/tests/test_importer.py (+1/-0)
Text conflict in deployer/service.py
Text conflict in deployer/tests/test_guiserver.py
To merge this branch: bzr merge lp://staging/~raharper/juju-deployer/populate-first
Related bugs:
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
juju-deployers | Pending | ||
Review via email: mp+249543@code.staging.launchpad.net |
Description of the change
This branch introduces a new parameter, -P, --populate_first, which does the following:
1) reverse the order of services during action/
2) For MAAS placement: juju does not support specifying a MAAS machine as a --to destination in the 'deploy' command. To work around this, we make the RPC equivalents of this CLI sequence:
juju add-machine foo.maas
MID=$(juju status | grep -B4 foo.maas | awk -F: '/^ "/ {print $1}')
juju deploy service --to $MID
To enable (2), we implement a new call, add_machine. jujuclient supports add_machine, but it does not expose the Placement parameter that's available in the juju RPC. Instead, we use add_machines, which accepts a generic MachineParams object; in deployer, we construct the correct (if mostly empty) dictionary for MAAS machine placement.
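As a rough illustration of what the description above means, here is a sketch of the MachineParams-style dict an add_machines RPC call would take to pin a machine to a named MAAS node. The field names follow juju RPC conventions, but the exact layout used by deployer/jujuclient may differ; treat this as an assumption, not the actual code.

```python
# Hedged sketch: build the (mostly empty) MachineParams-style dict for
# the add_machines RPC, pinning the new machine to a named MAAS node.
# Field names are assumed from juju RPC conventions, not copied from
# the deployer branch itself.

def maas_machine_params(maas_name, series="trusty"):
    """RPC-level equivalent of `juju add-machine foo.maas`."""
    return {
        "Series": series,
        "Jobs": ["JobHostUnits"],
        "Constraints": {},
        "ContainerType": "",
        "ParentId": "",
        # Placement is the parameter plain add_machine doesn't expose.
        "Placement": {"Scope": "", "Directive": maas_name},
    }

params = maas_machine_params("foo.maas")
print(params["Placement"]["Directive"])  # foo.maas
```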
3) Modify the logic in deploy_services and add_units so that, when deploying a service with placement directives and multiple units, we use a new method in the importer that invokes add_machine and waits until the machine reports status, then returns the machine index value.
The net result is that we first invoke add_machine for every unit that has a placement directive, and then deploy services or add units, passing in the correct machine id as the placement parameter.
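The add-machine-then-wait flow in step (3) might look roughly like the following. The helper name and the env.add_machine/env.status interface are illustrative stand-ins for the importer change, not the actual branch code.

```python
import time

def add_machine_and_wait(env, maas_name, timeout=600, poll=5):
    """Hypothetical helper mirroring the importer change: request a
    machine pinned to a MAAS node, poll status until that machine id
    appears, then return the id for use as a --to placement target."""
    machine_id = env.add_machine(maas_name)  # assumed wrapper over add_machines
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = env.status()
        if machine_id in status.get("machines", {}):
            return machine_id
        time.sleep(poll)
    raise RuntimeError("machine %s never reported status" % machine_id)
```

Deploying then becomes a matter of passing the returned id straight through as the service's placement.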
The test case for this is:
1) maas provider
2) this yaml:
test_placement:
  series: trusty
  services:
    apache2:
      branch: lp:charms/apache2
    mysql:
      branch: lp:charms/mysql
      to:
        - maas=oil-
    wordpress:
      branch: lp:charms/wordpress
      to:
        - maas=oil-
        - maas=oil-
        - maas=oil-
  relations:
    - [wordpress, mysql]
We include one unit with no placement (apache2); the test passes only if apache2's unit does not land on a machine allocated to other services' units via placement directives.
Sometimes you can get randomly lucky deploying this without --populate_first, but the only way to ensure it works 100% of the time is to allocate the targeted machines first.
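Revision 145 below replaces the list reversal with a sort that moves maas= items first. A minimal sketch of that ordering, with illustrative service/placement shapes (the real deployer service objects differ):

```python
# Hedged sketch of the "maas= items first" ordering from revision 145:
# a stable sort that floats services with maas= placement directives to
# the front, leaving relative order otherwise unchanged. The dict shape
# here is illustrative, not deployer's actual service representation.

def maas_first(services):
    def has_maas_placement(svc):
        return any(str(t).startswith("maas=") for t in svc.get("to", []))
    # False (has maas=) sorts before True (no placement); sort is stable.
    return sorted(services, key=lambda s: not has_maas_placement(s))

svcs = [
    {"name": "apache2"},
    {"name": "mysql", "to": ["maas=node-a"]},       # hypothetical node name
    {"name": "wordpress", "to": ["maas=node-b"]},   # hypothetical node name
]
print([s["name"] for s in maas_first(svcs)])  # ['mysql', 'wordpress', 'apache2']
```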
Unmerged revisions
- 150. By Ryan Harper
  Allow nested placement when target uses maas= placement. Fix up debugging log message during deploy_services.
- 149. By Ryan Harper
  Modify deployment.deploy_services to bring services with placement and multiple units online, so that subsequent services with placement directives can target previously deployed service units. Update get_machine to handle container placement.
- 148. By Ryan Harper
  Fix typo.
- 147. By Ryan Harper
  Fix typo.
- 146. By Ryan Harper
  Remove some unneeded changes.
- 145. By Ryan Harper
  Remove reversing of the list; that's not going to do it. Instead, update the services sort to move maas= items first, if present.
- 144. By Ryan Harper
  Turns out we don't need git -C, as the _call method runs from within the repo's path; also, older git binaries don't support -C, which broke building in precise chroots! Debugging output would have made this a lot easier.
- 143. By Ryan Harper
  Once more for fun.
- 142. By Ryan Harper
  Playing around with environment in schroot.
- 141. By Ryan Harper
  Invoke with bash.
Sorry, missed this due to leave / email / LP issues. I'll have a look later tonight.