Merge lp://staging/~xfactor973/charms/trusty/ceph-osd/coordinated-upgrade into lp://staging/~openstack-charmers-archive/charms/trusty/ceph-osd/next
Proposed by: Chris Holcombe
Status: | Needs review |
---|---|
Proposed branch: | lp://staging/~xfactor973/charms/trusty/ceph-osd/coordinated-upgrade |
Merge into: | lp://staging/~openstack-charmers-archive/charms/trusty/ceph-osd/next |
Diff against target: | 600 lines (+387/-18), 5 files modified: |
| .bzrignore (+1/-0), hooks/ceph.py (+155/-9), hooks/ceph_hooks.py (+199/-5), hooks/utils.py (+31/-3), templates/ceph.conf (+1/-1) |
To merge this branch: | bzr merge lp://staging/~xfactor973/charms/trusty/ceph-osd/coordinated-upgrade |
Related bugs: | (none) |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
James Page | | | Needs Fixing |
Chris MacNaughton | | | Pending |
Review via email: mp+287376@code.staging.launchpad.net |
Description of the change
This patch allows the ceph OSD cluster to upgrade itself one node at a time, using the ceph monitor cluster as a locking mechanism. There are most likely edge cases with this method that I haven't thought of, so consider this code lightly tested. It worked fine on EC2.
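To make the mechanism concrete, here is a minimal sketch of the idea. The helper names (`monitor_key_set`, `monitor_key_get`) and the key naming scheme are illustrative assumptions, not this branch's actual code; the real helpers are in the charm-helpers proposal linked under the unmerged revisions below. Each OSD unit records its progress in the monitors' config-key store and only upgrades after the previous unit reports that it has finished:

```python
# Sketch only: coordinate a rolling OSD upgrade through the ceph monitors'
# config-key store. Helper names and key names are hypothetical.
import socket
import subprocess
import time


def monitor_key_set(key, value):
    """Store a value in the monitors' key/value store (ceph config-key)."""
    subprocess.check_call(['ceph', 'config-key', 'put', key, value])


def monitor_key_get(key):
    """Fetch a value from the monitors' key/value store, or None if unset."""
    try:
        return subprocess.check_output(['ceph', 'config-key', 'get', key])
    except subprocess.CalledProcessError:
        return None


def wait_for_previous_node(previous_host, timeout=600):
    """Block until the previous OSD host reports its upgrade is done."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if monitor_key_get('osd_upgrade_{}_done'.format(previous_host)):
            return True
        time.sleep(10)
    return False  # give up after the timeout rather than blocking forever


def roll_osd_node(previous_host, upgrade_fn):
    """Upgrade this node only after the previous node has finished."""
    me = socket.gethostname()
    monitor_key_set('osd_upgrade_{}_started'.format(me), str(time.time()))
    if previous_host is None or wait_for_previous_node(previous_host):
        upgrade_fn()  # e.g. stop OSDs, upgrade packages, restart OSDs
    monitor_key_set('osd_upgrade_{}_done'.format(me), str(time.time()))
```

Because the shared state lives in the monitor cluster rather than on any one unit, every OSD unit sees the same view of which node is currently upgrading.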
Unmerged revisions
- 70. By Chris Holcombe
  Add back in monitor pieces. Will separate out into another MP.
- 69. By Chris Holcombe
  Hash the hostname instead of the IP address; that is more portable. Works now on LXC and also on EC2 (a sketch of the idea follows this list).
- 68. By Chris Holcombe
  Merge upstream.
- 67. By Chris Holcombe
  It rolls! This now upgrades and rolls the ceph OSD cluster one by one.
  Note: I put the helpers up for review on charm-helpers: https://code.launchpad.net/~xfactor973/charm-helpers/ceph-keystore/+merge/287205
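One plausible reading of revision 69, sketched below with made-up function and host names rather than the branch's code: hashing the hostname gives every unit the same deterministic upgrade order regardless of how addresses are assigned on LXC or EC2, so each node can work out on its own which peer it must wait for.

```python
# Illustrative sketch: derive a stable rolling-upgrade order by hashing each
# unit's hostname, so every node computes the same sequence independently.
import hashlib


def upgrade_position(hostnames, my_hostname):
    """Return (position, previous_host) for this node in the rolling order."""
    ordered = sorted(hostnames, key=lambda h: hashlib.sha256(h.encode()).hexdigest())
    position = ordered.index(my_hostname)
    previous_host = ordered[position - 1] if position > 0 else None
    return position, previous_host


# Example: every unit computes the same ordering from the same peer list.
hosts = ['ceph-osd-0', 'ceph-osd-1', 'ceph-osd-2']
print(upgrade_position(hosts, 'ceph-osd-1'))
```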