Merge lp://staging/~hazmat/pyjuju/unit-stop into lp://staging/~juju/pyjuju/docs
Status: Work in progress
Proposed branch: lp://staging/~hazmat/pyjuju/unit-stop
Merge into: lp://staging/~juju/pyjuju/docs
Diff against target: 143 lines (+139/-0), 1 file modified
  source/drafts/stopping-units.rst (+139/-0)
To merge this branch: bzr merge lp://staging/~hazmat/pyjuju/unit-stop
Related bugs: none
Reviewer: Mark Mims (review pending)
Review via email: mp+98035@code.launchpad.net
Commit message
Description of the change
A spec for how to cleanly stop agents.
Defines an additional state protocol for stopping services along with topology
support for recording user actions.
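As a rough illustration of the idea, here is a minimal in-memory sketch of recording a user's destroy intent in the topology; all names here are hypothetical, not pyjuju's actual API:

```python
# Minimal in-memory sketch of the proposed protocol: the user's intent is
# recorded in the topology first, and agents act on it afterwards.
# Illustrative names only, not pyjuju's actual topology API.

class Topology:
    def __init__(self):
        self.units = {}  # unit name -> attribute dict

    def add_unit(self, name):
        self.units[name] = {}

    def mark_destroyed(self, name):
        # Record the desired end state, not the activity: the flag stays
        # true even after the stop has actually been carried out.
        self.units[name]["destroyed"] = True

    def is_destroyed(self, name):
        return self.units[name].get("destroyed", False)

topo = Topology()
topo.add_unit("wordpress/0")
topo.mark_destroyed("wordpress/0")
assert topo.is_destroyed("wordpress/0")
```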
William Reade (fwereade) wrote:
A couple of typos and some vague rambling thoughts; I'll try to write them
up in some more detail for the list.
https:/
File source/
https:/
source/
This example can be extended to
s/its/it's/
https:/
source/
[See Topology Internals]
Please define "relevant entities". When stopping a service, for example,
do you mean we should mark all the following:
- all units of that service
- all unit relations of that service
- all machines running only that service
- the service itself
?
I think it'd be easier to think of things cascading a bit more -- so
individual entities detected their own impending death by watching for
changes to their immediate parents' statuses... but I may have to think
it through a little more.
https:/
source/
file
Not sure about this; can't we just always write agents with a "normal
exit 0" stanza and depend on that?
Otherwise, if we get a "weird" failure at this precise point, we won't
get an opportunity to start up again and tidy ourselves up nicely; and
besides it seems more correct to me to let the machine agent *always*
tidy up the unit agent upstart files it originally created (which it
will have to do anyway if it times out...).
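For reference, upstart's `normal exit` stanza declares which exit statuses should not trigger a respawn, which is the mechanism being proposed here. A hypothetical unit agent job (illustrative only, not juju's actual packaging) might look like:

```
# Hypothetical upstart job for a unit agent (illustrative sketch).
description "juju unit agent for wordpress/0"
respawn
# Treat exit status 0 as a deliberate stop: upstart will not respawn the job.
normal exit 0
exec /usr/bin/python -m juju.agents.unit --unit-name wordpress/0
```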
https:/
source/
s/exists/exits with a code indicating to upstart that it should not be
restarted/
(?)
https:/
source/
'stop' node deletion, shutsdown the units container.
s/shutsdown/shuts down/
But regardless, I think we need to consider this in a little more detail
-- we're not guaranteed to have a container, after all, and we probably
shouldn't be killing whole containers if we're just shutting down a
subordinate charm, for example.
https:/
source/
this.
If the PA kills a machine, it will then be responsible for cleaning up
the MA's nodes *and* any unit nodes? This doesn't seem quite right...
sounds like a job for the GC really (because -- I *think* -- it's going
to have to deal with unit cleanup as a consequence of forcible machine
termination anyway; if the MA doesn't handle a stop request properly, it
probably didn't stop its units either).
https:/
source/
footprint and zk state.
I think t...
Kapil Thangavelu (hazmat) wrote:
Thanks for the review!
http://
File source/
http://
source/
[See Topology Internals]
On 2012/03/17 12:20:30, fwereade wrote:
> Please define "relevant entities". When stopping a service, for example,
> do you mean we should mark all the following:
> - all units of that service
> - all unit relations of that service
> - all machines running only that service
> - the service itself
> ?
> I think it'd be easier to think of things cascading a bit more -- so
> individual entities detected their own impending death by watching for
> changes to their immediate parents' statuses... but I may have to think
> it through a little more.
Good point; for a service it would set the appropriate value for all the
units. I like the notion of children watching parents, but it's not
clear it's necessary -- ultimately it's the assigned machine's
responsibility to shut down the unit, and that seems just as well.
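The cascading idea discussed above could be sketched purely in memory (hypothetical names; the real system would use ZooKeeper watches on topology state):

```python
# In-memory sketch of the "children watch parents" cascade: when a parent
# entity is marked for destruction, its children observe the change and
# mark themselves in turn. Hypothetical structure, not pyjuju's actual code.

class Entity:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.destroyed = False
        if parent is not None:
            parent.children.append(self)

    def destroy(self):
        self.destroyed = True
        # The cascade: each child reacts to its parent's impending death.
        for child in self.children:
            child.destroy()

service = Entity("wordpress")
unit0 = Entity("wordpress/0", parent=service)
unit1 = Entity("wordpress/1", parent=service)

service.destroy()
assert unit0.destroyed and unit1.destroyed
```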
http://
source/
file
On 2012/03/17 12:20:30, fwereade wrote:
> Not sure about this; can't we just always write agents with a "normal
> exit 0" stanza and depend on that?
> Otherwise, if we get a "weird" failure at this precise point, we won't
> get an opportunity to start up again and tidy ourselves up nicely; and
> besides it seems more correct to me to let the machine agent *always*
> tidy up the unit agent upstart files it originally created (which it
> will have to do anyway if it times out...).
That sounds good, although a weird failure might just as well exit 0 ;-)
But yeah, the parent should perform final cleanup, and I like the
upstart stanza approach.
http://
source/
'stop' node deletion, shutsdown the units container.
On 2012/03/17 12:20:30, fwereade wrote:
> s/shutsdown/shuts down/
> But regardless, I think we need to consider this in a little more
> detail -- we're not guaranteed to have a container, after all, and we
> probably shouldn't be killing whole containers if we're just shutting
> down a subordinate charm, for example.
A subordinate is handled slightly differently: same protocol, different
parent. A subordinate is cleaned up by its parent/primary unit. Most of
the mechanics of container vs. not are already abstracted out behind
unit deploy factories.
http://
source/
this.
On 2012/03/17 12:20:30, fwereade wrote:
> If the PA kills a machine, it will then be responsible for cleaning up
> the MA's nodes *and* any unit nodes? This doesn't seem quite right...
> sounds like a job for the GC really (because -- I *think* -- it's going
> to have to deal with unit cleanup...
Kapil Thangavelu (hazmat) wrote:
Please take a look.
Gustavo Niemeyer (niemeyer) wrote:
Nice to see this moving forward. Some comments:
https:/
File source/
https:/
source/
Sorry for nitpicking, but can we please stick to a consistent convention
for headers? I believe that's what we have today:
Top level: ===
Inner level: ---
https:/
source/
termination of errant children.
This is a very dense paragraph. Can it be simplified to something like:
"""
Juju records the desired state in the topology, and preserves that state
while the actions to accomplish it are still taking place.
"""
https:/
source/
worked example going through a unit
"How it works, what follows"?
https:/
source/
topology as a key 'action' value 'destroy'
The "action: destroy" terminology feels a bit out of place here. This is
pretty much a procedure call, but the topology holds state instead.
Something like "destroyed: true" would be more appropriate. It remains
true even after the action has taken place.
https:/
source/
/units/unit-x/stop / sets watch on /units/unit-x/stop and timer.
See below regarding the watch side of it.
https:/
source/
This should be done after departing from relations.
https:/
source/
/units/unit-x/stop to reflect the action has been performed.
That step doesn't seem necessary. We can reuse the existing liveness
mechanism that is already supported by the agent. When the agent dies,
the machine will know it.
It also seems like a nice idea to have the stop ZooKeeper node exist
for as long as that's the intended state of the unit. We'll need that
node on startup for watching it anyway.
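A sketch of the decision a supervising agent might derive from the two nodes under discussion, an ephemeral presence node (liveness) plus a persistent 'stop' node (intent); hypothetical logic, not pyjuju's actual code:

```python
# Sketch: interpret the combination of an ephemeral presence node and a
# persistent 'stop' node. Hypothetical helper; the real code would consult
# ZooKeeper rather than take booleans.

def unit_disposition(presence_node_exists, stop_node_exists):
    if stop_node_exists and presence_node_exists:
        return "stopping"   # stop requested, agent still shutting down
    if stop_node_exists and not presence_node_exists:
        return "stopped"    # agent gone: safe to clean up unit state
    if presence_node_exists:
        return "running"
    return "dead"           # no intent recorded, but the agent is gone

assert unit_disposition(True, False) == "running"
assert unit_disposition(True, True) == "stopping"
assert unit_disposition(False, True) == "stopped"
```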
https:/
source/
'stop' node deletion, shuts down the unit, removing
Also touches the above point.
https:/
source/
footprint and zk state.
What about garbage collecting on the next action after the decision that
it's fine to GC it? I don't think we need a specific process that has
the task of GCing the topology.
...
Kapil Thangavelu (hazmat) wrote:
https:/
File source/
https:/
source/
On 2012/03/21 20:07:53, niemeyer wrote:
> Sorry for nitpicking, but can we please stick to a consistent
> convention for headers? I believe that's what we have today:
> Top level: ===
> Inner level: ---
done.
https:/
source/
termination of errant children.
On 2012/03/21 20:07:53, niemeyer wrote:
> This is a very dense paragraph. Can it be simplified to something like:
> """
> Juju records the desired state in the topology, and preserves that
> state while the actions to accomplish it are still taking place.
> """
sounds good.
https:/
source/
worked example going through a unit
On 2012/03/21 20:07:53, niemeyer wrote:
> "How it works, what follows"?
removed
https:/
source/
topology as a key 'action' value 'destroy'
On 2012/03/21 20:07:53, niemeyer wrote:
> The "action: destroy" terminology feels a bit out of place here. This
> is pretty much a procedure call, but the topology holds state instead.
> Something like "destroyed: true" would be more appropriate. It remains
> true even after the action has taken place.
"Action" here was really a name for a coordinated state transition, i.e.
an implied final state. Recording the state instead of the activity
sounds good. I have a minor preference for a generic key over per-state
keys, as I can't think of any multi-valued concurrent activities/states
here, but not guessing via constraints in the face of ambiguity does
seem to be the wiser choice.
https:/
source/
On 2012/03/21 20:07:53, niemeyer wrote:
> This should be done after departing from relations.
definitely
https:/
source/
/units/unit-x/stop to reflect the action has been performed.
On 2012/03/21 20:07:53, niemeyer wrote:
> That step doesn't seem necessary. We can reuse the existing liveness
> mechanism that is already supported by the agent. When the agent dies,
> the machine will know it.
> It also seems like a nice idea to have the stop ZooKeeper node exist
> for as long as that's the intended state of the unit. We'll need that
> node on startup for watching it anyway.
I wanted to avoid presence node usage, due to its known problems with
session expiration. I.e., if an agent restarts in the middle of the
stop process, the machine agent watch may erroneously think the agent...
Kapil Thangavelu (hazmat) wrote:
https:/
File source/
https:/
source/
footprint and zk state.
On 2012/03/26 00:42:07, hazmat wrote:
> On 2012/03/21 20:07:53, niemeyer wrote:
> > What about garbage collecting on the next action after the decision
> > that it's fine to GC it? I don't think we need a specific process
> > that has the task of GCing the topology.
> >
> > In both cases, though, we have an issue: when is it ok to GC it?
> > This proposal is not providing any means for deciding on that, which
> > means the data becomes eternal. That hole indicates postponing that
> > side of the problem may not be a good idea. We can do something
> > simple, but it'd be good to implement cleaning up at the same time.
> There is some cost associated with doing gc on every user action: it's
> n roundtrips for the client to check. I agree it is simpler for
> user-gc, but there are other forms of garbage that could be
> opportunistically collected with an agent (i.e. old /relations,
> non-topology-referenced service nodes). It's not clear that garbage
> will be a problem in practice outside of the topology, though, so I'm
> fine with the user-initiated gc confined to topology intents.
I'm still a bit concerned about the lag here. Take a service with 10
units: if we destroy the service, we end up recording destroy against
the 10 units (for their assigned machines' coordination) and the
service.
I suppose that goes away if we just record the action once against the
service, and have machine agents watch all their assigned units'
services in the topology.
In either case, we have several roundtrips queued for the next user
interaction, one per unit, to verify their completion/lag, esp. as the
activity is an ongoing one that may take several minutes (executing the
stop hook) and thus require gc checking for the next several user
commands. And in multi-user usage, the user who finally deleted the
object from the topology would be random, which also feels questionable
from an audit perspective.
Given that, I'd prefer the dedicated gc agent.
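A dedicated gc pass, as preferred above, might boil down to something like this in-memory sketch (hypothetical names and topology shape, not pyjuju's actual format):

```python
# Sketch of a dedicated gc sweep: entities marked destroyed whose agents
# are confirmed gone (no presence node) are removed from the topology.
# Hypothetical structure, for illustration only.

def gc_sweep(topology, live_agents):
    """Remove destroyed entries whose agent no longer reports presence."""
    collected = []
    for name, info in list(topology.items()):
        if info.get("destroyed") and name not in live_agents:
            del topology[name]
            collected.append(name)
    return collected

topology = {
    "wordpress/0": {"destroyed": True},  # stopped, agent gone -> collect
    "wordpress/1": {"destroyed": True},  # still running stop hook -> keep
    "mysql/0": {},                       # healthy -> keep
}
live_agents = {"wordpress/1", "mysql/0"}

assert gc_sweep(topology, live_agents) == ["wordpress/0"]
assert sorted(topology) == ["mysql/0", "wordpress/1"]
```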
Kapil Thangavelu (hazmat) wrote:
Please take a look.
https:/
File source/
https:/
source/
footprint and zk state.
On 2012/03/21 20:07:53, niemeyer wrote:
> What about garbage collecting on the next action after the decision
> that it's fine to GC it? I don't think we need a specific process that
> has the task of GCing the topology.
> In both cases, though, we have an issue: when is it ok to GC it? This
> proposal is not providing any means for deciding on that, which means
> the data becomes eternal. That hole indicates postponing that side of
> the problem may not be a good idea. We can do something simple, but
> it'd be good to implement cleaning up at the same time.
WRT when it is OK to GC it, I've added an additional section to the
document covering when it's okay.
Revision 6. By Kapil Thangavelu -- restore stop node creation to supervisory agent, too many races otherwise, the topo is the source of truth
Gustavo Niemeyer (niemeyer) wrote:
https:/
File source/
https:/
source/
'stop' node deletion, shuts down the unit, removing
On 2012/03/21 20:07:53, niemeyer wrote:
> Also touches the above point.
This one is still pending.
https:/
source/
footprint and zk state.
On 2012/03/26 00:59:22, hazmat wrote:
> i'm still a bit concerned about the lag here.
(...)
> Given that, i'd prefer the dedicated gc agent.
Sounds good, but let's please not sweep the problem under the carpet. If
we'll have a dedicated GC, let's implement it upfront, at the same time
we implement the garbage itself. It may be an additional command on
juju-admin, for example, that is called periodically on the zookeeper
machine.
While implementing GC, we may figure out details about the
implementation that might be best done differently, so we shouldn't be
tying ourselves to an implementation and postponing the decision on how
to GC to a later point.
https:/
File source/
https:/
source/
'destroy' key with a True value
Thanks. Please just s/destroy/destroyed/, since it currently reads as
an action to be carried out rather than a state.
https:/
source/
will trigger gc activities against
New and unfinished sentence.
https:/
source/
stop node.
Should be "no presence node and existent stop node."
https:/
source/
stop node.
Ditto.
https:/
source/
unit, service), its best to treat them
s/its/it's/
Unmerged revisions
6. By Kapil Thangavelu -- restore stop node creation to supervisory agent, too many races otherwise, the topo is the source of truth
5. By Kapil Thangavelu -- address comments, add section on gc conditions/actions
4. By Kapil Thangavelu -- address comments
3. By Kapil Thangavelu -- merge trunk
2. By Kapil Thangavelu -- stopping units spec
Reviewers: mp+98035@code.launchpad.net
Message:
Please take a look.
Description:
A spec for how to cleanly stop agents.
Defines an additional state protocol for stopping services along with
topology support for recording user actions.
https://code.launchpad.net/~hazmat/juju/unit-stop/+merge/98035
(do not edit description out of merge proposal)
Please review this at https://codereview.appspot.com/5847053/
Affected files:
  A [revision details]
  A source/drafts/stopping-units.rst