Merge lp://staging/~rogpeppe/juju-core/simplify-watchers into lp://staging/~juju/juju-core/trunk

Proposed by Roger Peppe
Status: Work in progress
Proposed branch: lp://staging/~rogpeppe/juju-core/simplify-watchers
Merge into: lp://staging/~juju/juju-core/trunk
Diff against target: 1115 lines (+258/-256)
13 files modified
state/machine.go (+2/-2)
state/machine_test.go (+10/-4)
state/service.go (+8/-8)
state/service_test.go (+24/-22)
state/state.go (+6/-6)
state/state_test.go (+9/-12)
state/unit.go (+6/-6)
state/unit_test.go (+9/-12)
state/watcher.go (+128/-116)
state/watcher/watcher.go (+39/-47)
state/watcher/watcher_test.go (+8/-15)
worker/machiner/machiner.go (+3/-2)
worker/provisioner/provisioner.go (+6/-4)
To merge this branch: bzr merge lp://staging/~rogpeppe/juju-core/simplify-watchers
Reviewer: The Go Language Gophers
Status: Pending
Review via email: mp+115070@code.staging.launchpad.net

Description of the change

environs/state: simplify watcher implementation

FOR DISCUSSION ONLY

When discussing the firewall code on IRC recently, Frank proposed
some reasonable-looking code along these lines:

func (m *machine) loop(fw *Firewaller) {
 // Forward each change to the firewaller until the watcher's
 // channel is closed...
 for change := range m.watcher.Changes() {
  fw.machineUnitChanges <- machineUnitsChange{m, change}
 }
 // ...then send a nil change to signal that the watcher is done.
 fw.machineUnitChanges <- machineUnitsChange{m, nil}
}

Unfortunately, given the way watchers currently work, we can't write
the code that way. My counter-proposal looked a bit like this:

func (m *machine) loop(fw *Firewaller) {
 defer m.watcher.Stop()
 for {
  select {
  case <-fw.tomb.Dying():
   return
  case change, ok := <-m.watcher.Changes():
   // Every send must also select on Dying, or this
   // goroutine could block forever at shutdown.
   select {
   case fw.machineUnitChanges <- machineUnitsChange{m, change}:
   case <-fw.tomb.Dying():
    return
   }
   // ok is false once the watcher's channel has been
   // closed; the zero-valued change has already been
   // forwarded above to signal that.
   if !ok {
    return
   }
  }
 }
}

With the firewaller and the unit agent, we will be doing a great deal
more watcher multiplexing, so we will see a *lot* of code like the above.

I'm not happy with that: the logic is largely obscured by the
requirement that we read from the Dying channel on every send and
receive.

I propose here that instead of each watcher having its own Stop
method, we explicitly pass a stop channel to the watcher. This means
that a watcher layered on top of another watcher can *delegate*
stopping to its sub-watcher. Any error from the watcher is made
available as the result of the Wait method (the name of this method
is probably wrong!), which waits for the change channel to be closed
before returning the error.
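
As a concrete illustration, a watcher under this scheme might look
something like the sketch below. All the names and signatures here
(ContentChange, NewContentWatcher, NewFilteredWatcher) are invented
for illustration, not the actual code in this branch:

type ContentChange struct{} // stands in for a real change type

type ContentWatcher struct {
 changes chan ContentChange // closed when the watcher finishes
 err     error              // set before changes is closed
}

// NewContentWatcher starts a watcher that shuts down when the given
// stop channel is closed; there is no Stop method.
func NewContentWatcher(stop <-chan struct{}) *ContentWatcher {
 w := &ContentWatcher{changes: make(chan ContentChange)}
 go func() {
  defer close(w.changes)
  for {
   select {
   // In a real watcher there would be cases here forwarding
   // changes from the underlying state, e.g.:
   // case w.changes <- change:
   case <-stop:
    return
   }
  }
 }()
 return w
}

func (w *ContentWatcher) Changes() <-chan ContentChange {
 return w.changes
}

// Wait waits for the Changes channel to be closed and then returns
// whatever error the watcher encountered.
func (w *ContentWatcher) Wait() error {
 for _ = range w.changes {
 }
 return w.err
}

// A watcher layered on top of another can delegate stopping
// entirely to its sub-watcher: it hands the same stop channel down
// and never needs to select on a Dying channel itself.
func NewFilteredWatcher(stop <-chan struct{}, keep func(ContentChange) bool) *ContentWatcher {
 sub := NewContentWatcher(stop)
 w := &ContentWatcher{changes: make(chan ContentChange)}
 go func() {
  defer close(w.changes)
  for change := range sub.Changes() {
   if keep(change) {
    w.changes <- change
   }
  }
  w.err = sub.Wait()
 }()
 return w
}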

I *think* this makes the watcher logic easier to reason about, and
it also makes the first piece of code above a viable possibility again
(in fact it would probably send the error down the channel too, like this:

func (m *machine) loop(fw *Firewaller) {
 for change := range m.watcher.Changes() {
  fw.machineUnitChanges <- machineUnitsChange{m, change, nil}
 }
 // The channel is closed; report the watcher's error (if any)
 // along with the final nil change.
 fw.machineUnitChanges <- machineUnitsChange{m, nil, m.watcher.Wait()}
}
)

Downsides of this proposal:
- It's a little more inconvenient, as you need to make a channel
before passing it to the watcher (see the usage sketch after this
list).
- Events may still arrive after you've stopped a watcher; there is
no guarantee of silence once the stop channel is closed.
- Without a Stop method, the association between the stop channel
and the watcher is more indirect.
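
For illustration, caller-side usage might look like this sketch
(handle is hypothetical, and NewContentWatcher is the invented
constructor from the sketch above):

func consume(fw *Firewaller) error {
 // The caller makes the stop channel itself, and closes it
 // (here, when the firewaller is dying) to stop the watcher.
 stop := make(chan struct{})
 w := NewContentWatcher(stop)
 go func() {
  <-fw.tomb.Dying()
  close(stop)
 }()
 for change := range w.Changes() {
  handle(change)
 }
 // Changes has been closed: either we were stopped (Wait
 // returns nil) or the watcher failed with an error.
 return w.Wait()
}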

I don't think that any of these are significant problems, and I think
that we would benefit greatly over time, but others might beg to differ.

Thoughts? Crack?

https://codereview.appspot.com/6373048/

Revision history for this message
Roger Peppe (rogpeppe) wrote :

Reviewers: mp+115070_code.launchpad.net,

Message:
Please take a look.

Please review this at https://codereview.appspot.com/6373048/

Affected files:
   A [revision details]
   M state/machine.go
   M state/machine_test.go
   M state/service.go
   M state/service_test.go
   M state/state.go
   M state/state_test.go
   M state/unit.go
   M state/unit_test.go
   M state/watcher.go
   M state/watcher/watcher.go
   M state/watcher/watcher_test.go
   M worker/machiner/machiner.go
   M worker/provisioner/provisioner.go

Revision history for this message
William Reade (fwereade) wrote :

On 2012/07/16 11:06:03, rog wrote:
> Please take a look.

I think I'm still -0 on this, because of the inconvenience and the
lack of a Stop method. I feel that a stop channel alone is not
sufficiently expressive, because I *like* being able to know that no
further changes will arrive after a Stop, and I'd prefer to keep that
guarantee somehow. In the unit-relation-watcher branch I do something
a bit like this by passing new *Tombs to new goroutines, and I use
those to ensure that I never send a depart event for a given relation
unit until I know its settings-watching goroutine has stopped and can
be counted on not to be blocking on a send to updates. Such a blocked
send would lead to a spurious post-depart change event, which would
in turn be interpreted as a re-join.

So, if we were supplying tombs (or simple wrappers thereof) I'd maybe be
+0, depending on how that made the implementation look.
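
For context, the tomb-per-goroutine pattern described above might
look roughly like the following sketch (using the launchpad.net/tomb
package juju-core used at the time; settingsChange and the
surrounding names are invented for illustration):

import "launchpad.net/tomb"

type settingsChange struct{} // stands in for a real change type

type settingsWatch struct {
 tomb tomb.Tomb
}

func (sw *settingsWatch) loop(updates chan<- settingsChange) {
 defer sw.tomb.Done()
 for {
  select {
  // In the real code there would be a case here forwarding
  // settings changes:
  // case updates <- change:
  case <-sw.tomb.Dying():
   return
  }
 }
}

// On departure, the parent kills the goroutine's tomb and waits for
// it to die, so no send to updates can still be in flight when the
// depart event goes out; a blocked send here would otherwise produce
// a spurious post-depart change event.
func (sw *settingsWatch) stop() error {
 sw.tomb.Kill(nil)
 return sw.tomb.Wait()
}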

https://codereview.appspot.com/6373048/

Revision history for this message
Gustavo Niemeyer (niemeyer) wrote :

We discussed this online. I'm -1 on changing these interfaces right now
as well.

https://codereview.appspot.com/6373048/

Unmerged revisions

302. By Roger Peppe

state/watcher: fix test

301. By Roger Peppe

state: make watchers use explicit stop channel

300. By Roger Peppe

start watcher simplification

299. By William Reade

reorder state watcher types

... so that service-related watchers are all together, and follow the usual
watcher-type; change-type; watcher-implementation convention.

No behaviour changes whatsoever; got an IRC LGTM from rogpeppe.

R=
CC=
https://codereview.appspot.com/6405043

298. By Roger Peppe

environs: add methods for opening and closing ports.

Also add a dummy implementation, and clean up dummy op
send semantics slightly.

R=niemeyer
CC=
https://codereview.appspot.com/6349097

297. By Dave Cheney

all: rename service => worker

As discussed in the 03/07/2012 team meeting, among a poor field of
candidates, worker(s) was the preferred name for the code that runs
inside an agent; e.g., the Provisioner worker runs inside the
Provisioning agent.

note: also fixed a gofmt issue in state/ which had been bugging me.

R=niemeyer
CC=
https://codereview.appspot.com/6345084

296. By Roger Peppe

charm: avoid crash when reading charms with simple relation strings.

Fixes bug #1015700

R=niemeyer
CC=
https://codereview.appspot.com/6344105

295. By Aram Hăvărneanu

mstate: refactor tests like in state.

Tests are refactored into multiple suites to be compatible with
state. See issue 6348053. After mstate replaces state I plan to
reverse the dependencies between suites. Right now each suite embeds
a UtilSuite and has to call UtilSuite.SetUpTest, resulting in
error-prone duplicate code. Keeping the multiple suites as containers
for data and embedding all suites into a StateSuite shared by all
tests is a better design. We will keep tests in separate files.

R=niemeyer
CC=
https://codereview.appspot.com/6381045

294. By Aram Hăvărneanu

mstate: don't use global sequencing for units.

Inside the mstate package, we shouldn't be using the service name
directly as a sequence name, since it kills the whole namespace. In
fact, the sequence for units is a special case, and should live inside
the service itself, since we want it to be garbage collected with the
service, and want it to reset if the service is removed/re-added.

R=niemeyer
CC=
https://codereview.appspot.com/6357071

293. By Aram Hăvărneanu

mstate: removing a node removes associated nodes.

R=niemeyer
CC=
https://codereview.appspot.com/6356060
