Merge lp://staging/~dave-cheney/juju-core/go-environs-dummy-add-instance-id into lp://staging/~juju/juju-core/trunk
Status: Work in progress

Proposed branch: lp://staging/~dave-cheney/juju-core/go-environs-dummy-add-instance-id
Merge into: lp://staging/~juju/juju-core/trunk
Diff against target: 80 lines (+13/-10), 2 files modified
  cmd/juju/cmd_test.go (+4/-4)
  environs/dummy/environs.go (+9/-6)
To merge this branch: bzr merge lp://staging/~dave-cheney/juju-core/go-environs-dummy-add-instance-id
Related bugs:

| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| The Go Language Gophers | | | Pending |
Review via email: mp+110730@code.staging.launchpad.net |
Description of the change
environs/dummy: add Instances

*PROPOSAL*: Add the facility for dummy to return the actual
environs.Instance objects that were started. The goal is to be able to
identify whether dummy started _your_ instance, not just _an_ instance.
This could also be accomplished by returning the string form of
instance.Id().
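A rough sketch of the proposal (names are illustrative and only loosely modelled on environs/dummy; this is not the actual juju-core code):

```go
package main

import "fmt"

// Instance mirrors the shape of environs.Instance for illustration only.
type Instance interface {
	Id() string
}

// dummyInstance is a hypothetical stand-in for dummy's instance type.
type dummyInstance struct{ id string }

func (i *dummyInstance) Id() string { return i.id }

// Operation records what the dummy environ did. Keeping the actual
// Instance values (rather than just a count or a flag) lets a test
// check that dummy started *your* instance, not merely *an* instance.
type Operation struct {
	Kind      string
	Instances []Instance
}

func main() {
	inst := &dummyInstance{id: "i-0"}
	op := Operation{Kind: "start-instance", Instances: []Instance{inst}}
	// Identity can be checked directly, or via the Id() string.
	fmt.Println(op.Instances[0] == Instance(inst), op.Instances[0].Id())
}
```

As the description notes, recording only `instance.Id()` strings would serve the same goal with a smaller surface.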
Unmerged revisions

- 228. By Dave Cheney

  merge from trunk

- 227. By Dave Cheney

  wip

- 226. By Dave Cheney

  environs: add the facility to query all running instances

  The PA needs to be able to find out (with some degree of accuracy)
  all the instance IDs known to the Environ.

  R=rog, niemeyer
  CC=
  https://codereview.appspot.com/6307071

- 225. By William Reade
  Add tomb to Pinger

  I don't recall why we agreed that Stop/Kill should not return errors, or
  why I was asked not to use a tomb in the original implementation, but I'm
  going to need to know when a Pinger is dying soon; and this will also
  allow us to check pinger sanity in the unit-relation-watcher CL.

  R=rog, niemeyer
  CC=
  https://codereview.appspot.com/6298082

- 224. By Roger Peppe
  state: implement Machine.WatchUnits

  R=niemeyer
  CC=
  https://codereview.appspot.com/6299082

- 223. By William Reade
  add unitRelationWatcher/unitRelationChange types

  In python, we track settings node version rather than settings node
  content as we do here. This has been changed because we want to be able
  to run hooks in the contexts that caused those specific hooks to run --
  including settings for all relation members at that point in time; and
  this consequently frees us from having to worry about what happens when a
  hook tries to get the settings of a unit relation that *was* joined when
  the hook was executed but has subsequently been deleted. It's also
  beneficial in that it only communicates changes when the actual content
  has changed, rather than every time the node is written.

  Note that it will still be perfectly possible to implement a hook
  scheduler that collapses multiple queued events from the same unit, as is
  done in python, so we shouldn't be in danger of needing to store a
  ridiculous excess of historical events.

  R=niemeyer
  CC=
  https://codereview.appspot.com/6305082

- 222. By William Reade
  ChildrenWatcher now treats missing nodes as having no children

  This is consistent with ContentWatcher (which returns {false, ""} content
  changes on missing nodes) and enables simpler code when adding relations
  (by deferring role node creation until unit add time in all cases, rather
  than just for container-scoped relations).

  R=niemeyer
  CC=
  https://codereview.appspot.com/6301066

- 221. By Roger Peppe
  state: automatically assign machines to subsidiary units from their
  principal units.

  R=bcsaller, niemeyer, TheMue
  CC=
  https://codereview.appspot.com/6299073

- 220. By Frank Mueller
  state: Second approach to improve the error handling.

  This second approach returns errors generated with fmt.Errorf() and
  containing high-level information such as names. In case of an internal
  error this is appended or, in the case of some topology errors, directly
  returned. This depends on the error returned by the topology method.

  R=niemeyer, rog
  CC=
  https://codereview.appspot.com/6296064

- 219. By William Reade
  state: make watcher termination clearer, safer

  The existing watcher implementations shared a flawed implementation
  style: specifically, closure of the input channel was not treated as an
  error. This is presumably because the Stop method also stopped the
  watcher that provided the input channel, and this could happen at any
  time whatsoever (from the perspective of the loop goroutine); however,
  this then meant that the loop goroutine treated any of the following
  conditions:

  * watcher itself was stopped by tomb.Kill()
  * subwatcher was stopped by tomb.Kill()
  * subwatcher channel closed for some other reason

  ...as a sign of clean shutdown; that is to say that all subwatcher
  errors were being swallowed silently.

  This CL avoids that problem by deferring all cleanup work, including
  stoppage of the subwatcher, to the end of the loop goroutine; and, in
  the loop goroutine, treating <-w.tomb.Dying() as the only reason to
  terminate cleanly. Unexpected closures of the input channel are hence
  correctly detected as errors; and expected closures (caused by cleanup)
  will never be detected, because we will always have exited the
  goroutine loop by the time we stop the subwatcher.

  As a consequence, there is no need to wait on subwatcher death: if it
  dies while the loop is running the error will be detected soon enough
  via the closure of the input channel; but that's not really any reason
  to abort a send on the output channel.

  Original discussion on https://codereview.appspot.com/6300085/ (which I
  appear to have proposed for the wrong target somehow).
LGTM with the suggestion, but please wait for rog as he has been
following this closely.
https://codereview.appspot.com/6305105/diff/1/environs/dummy/environs.go
File environs/dummy/environs.go (right):

https://codereview.appspot.com/6305105/diff/1/environs/dummy/environs.go#newcode60
environs/dummy/environs.go:60: Instances []environs.Instance
The idea of extending the operation with more details of what was
actually done sounds reasonable (except for the fact that mocking isn't
great, but we're past that point already).

That said, enriching the Operation type with every single field that may
be used by every single operation, and then having to know which
operations set which fields with which names, sounds a bit like we'll
get into a messy situation very soon.

We don't have to refactor it yet into something richer (like a type per
op, for example), but I suggest at least being more specific about what
the field means. I suggest something like StartedInstances and
StoppedInstances for now, so there's absolutely no doubt about what
they mean and when they are set. Then, let's watch to see how the
pattern grows.
https://codereview.appspot.com/6305105/
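The reviewer's naming suggestion might look something like this (a hypothetical sketch, not the actual diff under review):

```go
package main

import "fmt"

// Operation sketches the suggestion: per-operation field names leave
// no doubt about which operation sets which field.
type Operation struct {
	Kind             string
	StartedInstances []string // set only by start-instance operations
	StoppedInstances []string // set only by stop-instances operations
}

func main() {
	start := Operation{Kind: "start-instance", StartedInstances: []string{"i-0"}}
	stop := Operation{Kind: "stop-instances", StoppedInstances: []string{"i-0"}}
	fmt.Println(start.StartedInstances, stop.StoppedInstances)
}
```

The richer alternative the reviewer mentions, a type per op, would trade this flat record for a type switch at each consumer.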