Merge lp://staging/~cbehrens/nova/servers-search into lp://staging/~hudson-openstack/nova/trunk

Proposed by Chris Behrens
Status: Merged
Approved by: Josh Kearney
Approved revision: 1305
Merged at revision: 1410
Proposed branch: lp://staging/~cbehrens/nova/servers-search
Merge into: lp://staging/~hudson-openstack/nova/trunk
Diff against target: 1621 lines (+1110/-135)
12 files modified
nova/api/ec2/cloud.py (+30/-12)
nova/api/openstack/common.py (+33/-0)
nova/api/openstack/servers.py (+80/-17)
nova/api/openstack/views/servers.py (+1/-15)
nova/compute/api.py (+57/-31)
nova/db/api.py (+16/-10)
nova/db/sqlalchemy/api.py (+149/-44)
nova/db/sqlalchemy/models.py (+1/-0)
nova/tests/api/openstack/fakes.py (+6/-2)
nova/tests/api/openstack/test_servers.py (+275/-2)
nova/tests/test_compute.py (+457/-1)
nova/tests/test_metadata.py (+5/-1)
To merge this branch: bzr merge lp://staging/~cbehrens/nova/servers-search
Reviewer Review Type Date Requested Status
Brian Waldon (community) Approve
Rick Harris (community) Approve
Brian Lamar (community) Abstain
Jay Pipes (community) Needs Fixing
Review via email: mp+67765@code.staging.launchpad.net

Description of the change

This adds the servers search capabilities defined in the OS API v1.1 spec, and more for admins.

For users, flavor=, image=, status=, and name= can be specified. name= supports regular expression matching.
Most other options are ignored. (Things outside of the spec, like 'recurse_zones' and 'reservation_id', still work as well.)

If admin_api is enabled and the context is an admin: along with the above, one can specify ip= and ip6=, which will do regular expression matching. Also, any other 'Instance' column name can be specified, so you can do regexp matching there as well. Unknown Instance columns are ignored.

Also fixes up fixed_ip=, making it return a 404 instead of a 500 error... and handling this properly with zone recursion as well.

Revision history for this message
Brian Lamar (blamar) wrote :

This seems like a pretty inefficient way to do filtering if all we're looking for is wildcard matching. Why return everything when SQLAlchemy should successfully provide filtering across multiple backends? Can this be done using something like...

query.filter(Instance.name.like('%test%'))

...rather than returning each row and doing a regex?

review: Needs Fixing
Revision history for this message
Jay Pipes (jaypipes) wrote :

Hi Chris!

Few little comments...

190 + # See if a valid search option was passed in.
191 + # Ignore unknown search options for possible forward compatability.
192 + # Raise an exception if more than 1 search option is specified
193 + found_opt = None
194 + for opt in exclusive_opts:
195 + v = search_opts.get(opt, None)
196 + if v:
197 + if found_opt is None:
198 + found_opt = opt
199 + else:
200 + LOG.error(_("More than 1 mutually exclusive "
201 + "search option specified (%(found_opt)s and "
202 + "%(opt)s were both specified") % locals())
203 + raise exception.InvalidInput(reason=_(
204 + "More than 1 mutually exclusive "
205 + "search option specified (%(found_opt)s and "
206 + "%(opt)s were both specified") % locals())

Might be more succinctly written as:

found_opt = None
found_opts = [opt for opt in exclusive_opts
              if search_opts.get(opt, None)]
if len(found_opts) > 1:
    found_opt_str = ", ".join(found_opts)
    msg = _("More than 1 mutually exclusive "
            "search option specified: %(found_opt_str)s") % locals()
    LOG.error(msg)
    raise exception.InvalidInput(reason=msg)

if found_opts:
    found_opt = found_opts[0]

I recognize (and agree with your decision) not to do regexp matching via the database. Not only is it not portable, it's not any more efficient to do that at the database level (still requires a scan of all pre-restricted rows anyway...).

However this method signature:

+def instance_get_all_by_name_regexp(context, ipv6_regexp):

Looked strange, perhaps a copy/paste error? Should ipv6_regexp be name_regexp?

Cheers!
jay

review: Needs Fixing
Revision history for this message
Brian Lamar (blamar) wrote :

> I recognize (and agree with your decision) not to do regexp matching via the database. Not only is
> it not portable, it's not any more efficient to do that at the database level (still requires a
> scan of all pre-restricted rows anyway...).

Regular expressions are more expensive than LIKE matches (which, in their own right, are pretty expensive). Do we really want operators doing complex regexes? At that point we should be putting our data into a purpose-built search indexing solution like Lucene/Solr/ElasticSearch/Sphinx because that's what they're good at.

Revision history for this message
Jay Pipes (jaypipes) wrote :

> > I recognize (and agree with your decision) not to do regexp matching via the
> database. Not only is
> > it not portable, it's not any more efficient to do that at the database
> level (still requires a
> > scan of all pre-restricted rows anyway...).
>
> Regular expressions are more expensive than LIKE matches (which in their own
> right, are pretty expensive).

Actually, this is incorrect. LIKE '%something%' and column REGEXP 'someregexp' will produce identical query execution plans. The complexity of the REGEXP determines whether or not a simple string match such as '%something%' would be computationally more expensive to execute per row than a compiled regexp match.

> Do we really want operators doing complex
> regexs? At that point we should be putting our data into a purpose-built
> search indexing solution like Lucene/Solr/ElasticSearch/Sphinx because that's
> what they're good at.

Lucene/Solr/ElasticSearch/Sphinx are fulltext indexing technologies. What's happening here is looking for a particular pattern in a short string. The solution presented here is flexible enough to query for various IP(v6) and name patterns without having to set up a separate fulltext indexing server for this kind of thing, which I think would be going overboard.

I understand your concern about the regexp inefficiency. Just saying that it's not that much less efficient than doing a REGEXP or LIKE '%something%' expression in SQL. The same loop and match process is occurring in Python code versus C code. The problem is that not all DBs support the REGEXP operator...

Just my two cents,
-jay

Revision history for this message
Chris Behrens (cbehrens) wrote :

Brian:

Yeah, I'm aware of everything in your first comment. I started out using 'like', but I ditched it for a few reasons.

1) Some of the things I'm matching against are not stored in the database. IPv6 addresses and the instance 'name', in particular. So, I'd have to build search functions in python that can parse '%..%'. That.. or I'd end up supporting regular expressions for some queries and the sql 'like' format for others.
2) Related to #1 in a way, what if the backend ends up not being SQL at all?
3) 'like' does a full table scan anyway and regular expressions are a little more flexible.

That said, using 'like' is going to be more efficient because I think it'll result in fewer 'join's. But, really, there are way too many joins in all of the previously existing instance-getting code anyway. I duplicated what was used previously, but I don't think all of them are necessary. Someone really needs to go through and do an audit on all of these DB calls and find a way to not have to join stuff that isn't going to be used by the caller. I'd rather that not be done as a part of this merge... as that'll be a huge task. :)
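A minimal sketch of the Python-side matching Chris describes (the helper name and the dict-shaped instances are assumptions for illustration, not the actual branch code):

```python
import re

def filter_instances_by_regexp(instances, key, pattern):
    """Filter instance dicts in Python rather than in SQL, since values
    like the IPv6 address or the derived 'name' are not DB columns."""
    regexp = re.compile(pattern)
    return [inst for inst in instances
            if regexp.search(str(inst.get(key, '')))]

# Usage: match display names starting with 'web-'
instances = [{'display_name': 'web-01'}, {'display_name': 'db-01'}]
matches = filter_instances_by_regexp(instances, 'display_name', r'^web-')
```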

Revision history for this message
Chris Behrens (cbehrens) wrote :

Jay: Ah, thanks! I'll get those fixed up.

Revision history for this message
Brian Waldon (bcwaldon) wrote :

Is it necessary to have a separate db api method for each query string parameter? This seems like it could cause unnecessary growth in that module.

This doesn't seem to match the latest v1.1 spec. Is there a reason for the divergence?

review: Needs Information
Revision history for this message
Chris Behrens (cbehrens) wrote :

> Is it necessary to have a separate db api method for each query string
> parameter? This seems like it could cause unnecessary growth in that module.
>
> This doesn't seem to match the latest v1.1 spec. Is there a reason for the
> divergence?

Will check the spec.

As far as the separate db api methods... just to clarify... did you mean in db/api or actually in compute/api? Or both? Mostly everything can use the search "by_column" method I added. That tries to generically support columns by doing a 'getattr' on the Instance class for the column value to match. But, searching by 'ip', 'ipv6', and 'name' had to be separate because:

1) 'ip' matches against the FixedIp and FloatingIP tables, instead..
2) 'ipv6' isn't stored in the database
3) 'name' isn't stored in the database either... it's a @property of the class and a getattr() doesn't return the value directly... it returns a Property instance IIRC.
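The @property behavior Chris describes in point 3 can be demonstrated in isolation (a toy stand-in class, not the real Instance model):

```python
class FakeInstance:
    """Toy stand-in for the Instance model's derived 'name' attribute."""
    id = 42

    @property
    def name(self):
        return "instance-%08x" % self.id

# getattr on the class returns the property descriptor itself...
print(type(getattr(FakeInstance, 'name')))  # <class 'property'>
# ...while getattr on an instance returns the computed value.
print(getattr(FakeInstance(), 'name'))      # instance-0000002a
```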

Revision history for this message
Brian Lamar (blamar) wrote :

> > > I recognize (and agree with your decision) not to do regexp matching via
> the
> > database. Not only is
> > > it not portable, it's not any more efficient to do that at the database
> > level (still requires a
> > > scan of all pre-restricted rows anyway...).
> >
> > Regular expressions are more expensive than LIKE matches (which in their own
> > right, are pretty expensive).
>
> Actually, this is incorrect. LIKE '%something%' and column REGEXP 'someregexp'
> will produce identical query execution plans. The complexity of the REGEXP
> determines whether or not a simple string match such as '%something%' would be
> computationally more expensive to execute per row than a compiled regexp
> match.
>
> > Do we really want operators doing complex
> > regexs? At that point we should be putting our data into a purpose-built
> > search indexing solution like Lucene/Solr/ElasticSearch/Sphinx because
> that's
> > what they're good at.
>
> Lucene/Solar/ElasticSearch/Sphinx are fulltext indexing technologies. What's
> happening here is looking for a particular pattern in a short string. The
> solution presented here is flexible enough to query for various IP(v6) and
> name patterns without having to set up a separate fulltext indexing server for
> this kind of thing, which I think would be overboard.
>
> I understand your concern about the regexp inefficiency. Just saying that it's
> not that much less efficient than doing a REGEXP or LIKE '%something%'
> expression in SQL. The same loop and match process is occurring in Python code
> versus C code. The problem is that not all DBs support the REGEXP operator...
>
> Just my two cents,
> -jay

Wow, just gonna have to agree to disagree then... I'm really positive it's going to be tons more efficient to do simple wildcard matching vs. fully-fledged perl-compatible regular expressions. I'm all about not putting specialized code into projects and I see this as being search-specialized code. I'm going to bow out now because I don't want to further this discussion on a merge-prop (sorry Chris!!).

> 1) Some of the things I'm matching against are not stored in the database. IPv6 addresses and the
> instance 'name', in particular. So, I'd have to build search functions in python that can parse
> '%..%'. That.. or I'd end up supporting regular expressions for some queries and the sql 'like'
> format for others.

Yeah, I can't believe that v6 addresses aren't stored in the DB. :) Good point.

> 2) Related to #1 in a way, what if the backend ends up not being SQL at all?

Heh, good luck with this one, but the point I'm trying to make is compatible with this as well. I use ElasticSearch with some of my CouchDB projects and there are tons of indexers for all types of datastores.

> 3) 'like' does a full table scan anyway and regular expressions are a little more flexible.

Absolutely, I just feel that we should be leaving the database to do the data-storage, a search-indexer to do searching, and Python to tie everything together.

Going to abstain from this merge prop, my apologies for cluttering it with conversations that most assuredly should have happened at the Blueprint level.

review: Abstain
Revision history for this message
Chris Behrens (cbehrens) wrote :

blamar: No problem, I understand. And FWIW, it's not 'pcre'.. it's just the python 're' module.. only slightly better performance-wise, probably. :)

Revision history for this message
Chris Behrens (cbehrens) wrote :

Ok. Per discussion with bcwaldon, I'm going to make the "name" query string option map to searching by display name. The spec says name=serverName, and I got clarification that serverName means what the user sees... which is the Instance.display_name.

The spec doesn't mention 'instance_name' (the Instance.name property in nova), but admins will want that. So, I'll make the "instance_name" option map to what "name" did, which was searching by the instance name. instance_name will have admin_api and admin context checking.

I'll remove searching by 'server_name='. It's always NULL in my tables, anyway, and it's not clear what this is really supposed to be used for (I haven't looked in the code, honestly). It's not mentioned in the spec, and it doesn't seem to be an immediate need... so...

Revision history for this message
Chris Behrens (cbehrens) wrote :

After reviewing section 4.1.1 in the v1.1 API spec regarding 'list', I went ahead and added some of the other search options like 'image', 'flavor', and status... along with making the previous changes that were discussed. Ie:

name matches display_name
instance_name is admin API/admin context only
I also made ip and ip6 admin-only, since they're not in the spec.
Unit tests have been added for OS API

Revision history for this message
Brian Waldon (bcwaldon) wrote :

A branch recently went in that replaced all returns of fault-wrapped webob exceptions with raising of non-wrapped webob exceptions. Can you convert any of the "return faults.Fault(...)" to "raise ..."?

I think we may want to take the approach of ignoring unsupported query params rather than raising an exception (that's how Glance does it). We should be liberal in what we accept. A lot of the _get_items / _items code can get cleaned back up if we don't need to explicitly check filter params. Same goes for check_option_permissions.

Power states belong to the compute layer, and server statuses belong to the OSAPI. I think we should remove any OSAPI status-related code from the power_states module and place it in either api.openstack.common or api.openstack.servers. What do you think?

What is the 'fresh' query filter? I just started seeing that in novaclient requests.

Are there future plans to implement changes-since? I can see it is an accepted filter, just not respected.

In the DB API, why is there a _get_all_by_column_regexp and a _get_all_by_instance_name? Can't we use get_all_by_column_regexp to accomplish the name filter?

Can you change _get_all_by_column to _get_all_by_column_value? The former doesn't clearly say you are checking the value in a specific column when there is another function with column_regexp.

I don't see it necessary to list OSAPI-specific filters in the compute api. Can we fix that known_params?

I really want to be able to sort by multiple parameters, and I think most people using the API would expect to be able to do that. How hard would it be to support this?

Don't get me wrong here, I'm really excited to get this feature in. Having done this in Glance already, I know this isn't a small undertaking :)

review: Needs Fixing
Revision history for this message
Chris Behrens (cbehrens) wrote :
Download full text (5.2 KiB)

> A branch recently went in that replaced all returns of fault-wrapped webob
> exceptions with raising of non-wrapped webob exceptions. Can you convert any
> of the "return faults.Fault(...)" to "raise ..."?

Aha.. that explains something. :) Sure.

> I think we may want to take the approach of ignoring unsupported query params
> rather than raising an exception (that's how Glance does it). We should be
> liberal in what we accept. A lot of the _get_items / _items code can get
> cleaned back up if we don't need to explicitly check filter params. Same goes
> for check_option_permissions.

Ok. I was thinking about v1.1 and trying to 'guess' at what I should do here based on the possible response codes to the query. I agree with what you say, though. I'd rather allow and ignore search options we don't understand. That will clean things up slightly. I still need to check for permissions on certain search items that should be admin-api only, but I can cut the 2 lists of options down to 1.. (list of options that require admin)
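The single-list approach described above might look something like this (the function and option names are illustrative assumptions, not the merged code):

```python
def remove_invalid_options(search_opts, allowed_options, is_admin):
    """Silently drop unknown search options instead of raising, and
    drop admin-only options for non-admin callers (hypothetical sketch)."""
    admin_only_options = ('ip', 'ip6', 'instance_name')
    for opt in list(search_opts):
        if opt not in allowed_options:
            # Unknown option: ignore for forward compatibility.
            search_opts.pop(opt)
        elif opt in admin_only_options and not is_admin:
            # Known, but restricted to the admin API.
            search_opts.pop(opt)

# Usage: a non-admin caller passing an unknown and an admin-only option
opts = {'name': 'web', 'bogus': '1', 'ip': '10'}
remove_invalid_options(opts, ('name', 'ip'), is_admin=False)
# opts is now {'name': 'web'}
```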

>
> Power states belong to the compute layer, and server statuses belong to the
> OSAPI. I think we should remove any OSAPI status-related code from the
> power_states module and place it in either api.openstack.common or
> api.openstack.servers. What do you think?

I had the same debate. I was going to move it to api.openstack.common.. until I realized that compute.api needs to send the 'status' to search by to child zones. This means that compute.api would end up making a call into api.openstack.common to convert a power state to the status to search for.. and that didn't feel quite right. I think I can solve that differently. I can make the OS API pass the 'status' to search for through to compute.api (along with 'state' which is the translation of status to power state). compute.api will ignore 'status' and use 'state' but it'll pass 'status' on to novaclient to talk to zones.

That or I could leave it the way it is. :) Changing it probably makes sense so there's clear separation.

>
> What is the 'fresh' query filter? I just started seeing that in novaclient
> requests.

Honestly, I have no clue. :-/ But I know that novaclient sends it. I figured it was v1.0 equiv of 'changes-since' that's in v1.1.

>
> Are there future plans to implement changes-since? I can see it is an accepted
> filter, just not respected.

Yeah, good question. I'm already going beyond the scope of the story that was given to us for the sprint, and that one seemed a little more difficult. Wasn't sure if compute.api should filter that.. or let OS API filter on the return to the client. I thought maybe titan would have a better idea of how that should be done. ;)

>
> In the DB API, why is there a _get_all_by_column_regexp and a
> _get_all_by_instance_name? Can't we use get_all_by_column_regexp to accomplish
> the name filter?

Let me double check that. I saw a problem early on with this, because 'name' is a @property in the Instance class and I seemed to be getting back a property object with getattr. I bet I was doing testing with a class and not an instance of a class at the time, however. I bet this can be consolidated.

> ...

Read more...

Revision history for this message
Brian Waldon (bcwaldon) wrote :

> > Power states belong to the compute layer, and server statuses belong to the
> > OSAPI. I think we should remove any OSAPI status-related code from the
> > power_states module and place it in either api.openstack.common or
> > api.openstack.servers. What do you think?
>
> I had the same debate. I was going to move it to api.openstack.common.. until
> I realized that compute.api needs to send the 'status' to search by to child
> zones. This means that compute.api would end up making a call into
> api.openstack.common to convert a power state to the status to search for..
> and that didn't feel quite right. I think I can solve that differently. I
> can make the OS API pass the 'status' to search for through to compute.api
> (along with 'state' which is the translation of status to power state).
> compute.api will ignore 'status' and use 'state' but it'll pass 'status' on to
> novaclient to talk to zones.
>
> That or I could leave it the way it is. :) Changing it probably makes sense
> so there's clear separation.

Yeah, I really don't want the compute api (the power states module) to depend on concepts defined in the openstack api.

> > Are there future plans to implement changes-since? I can see it is an
> accepted
> > filter, just not respected.
>
> Yeah, good question. I'm already going beyond the scope of the story that was
> given to us for the sprint, and that one seemed a little more difficult.
> Wasn't sure if compute.api should filter that.. or let OS API filter on the
> return to the client. I thought maybe titan would have a better idea of how
> that should be done. ;)

We got it :)

> > I don't see it necessary to list OSAPI-specific filters in the compute api.
> > Can we fix that known_params?
>
> Well, I need a table to map the search parameter to the function to call with
> its arguments. In the initial merge prop, I didn't have this as I used
> getattr(self, '_get_all_by_%s' % opt_name).. and called that function. The
> problem is that the generic 'column' ones need an extra argument. The
> original merge prop had a list of those search options that could be done
> generically. I could change it back... but it'd also require moving all of
> the _get_all_by* calls back outside of 'def get_all' so I could use
> getattr(self, ..) again.
>
> Hm.. did that make sense? We can chat on IRC to clarify if needed.

What bothers me is the inclusion of non-filter query params (like changes-since). If those would just go away I would be more okay with it.

> > I really want to be able to sort by multiple parameters, and I think most
> > people using the API would expect to be able to do that. How hard would it
> be
> > to support this?
>
> Yeah, I hear you. The story I was given says to throw an error if multiple
> are given, so that's the main reason why I didn't code support for it. :) My
> instinct is to say it's rather difficult... but when I really start to think
> about it, I'm not sure it's so bad. I can think about it some more while
> doing the other fixups.

I don't mean to toot my own horn here, but look at glance and see if that approach would work.

Revision history for this message
Chris Behrens (cbehrens) wrote :

Alright. This is almost a re-write since the original merge prop, now. I updated the description up at the top to better reflect what this ends up adding.

Fire away.. :)

Revision history for this message
Ed Leafe (ed-leafe) wrote :

Only got through the first 700 lines of the diff so far, but here's what I have:

Lines 134 -143 can be more efficiently written:

def power_states_from_status(status):
    """Map the server status string to a list of power states."""
    low_status = status.strip().lower()
    power_states = [val for key, val in _STATUS_MAP.iteritems()
                    if key.lower() == low_status and val is not None]
    return power_states
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The method '_check_servers_options()' would be better named something like '_remove_invalid_options()' so that the fact that it modifies search_opts is more clearly expressed.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Line 643: What is this supposed to be doing? Did you maybe mean to copy the 'filters' dict to another name?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Lines 660 -664 can be more efficiently written:

for filter_name in query_filters:
    query_prefix = _exact_match_filter(query_prefix, filter_name,
            filters.pop(filter_name))

Revision history for this message
Chris Behrens (cbehrens) wrote :

Ed:

Ok re: _check_servers_options().

643: I meant to make a copy of it to use going forward... I just reassigned it to the same variable name. I'm making a copy because I'm modifying it and can't have it affect the caller.

660-664: Good call... makes sense.

Revision history for this message
Chris Behrens (cbehrens) wrote :

Updated. Added a comment around #643 to clarify the .copy().

Revision history for this message
Brian Waldon (bcwaldon) wrote :

This looks great, Mr. Behrens. I like the db-api implementation a lot more. I've got just a few comments:

I noticed you linked a lazy load bug to this. Can you explain how this branch addresses that?

164: Instead of using an instance variable, can you implement this with constant tuples that are passed into _remove_invalid_options as an 'allowed_search_options' argument?

217: I'm not crazy about this method name. How about just _get_servers?
215: This is only called once, so does it make sense to break it out into its own method? I think it all fits under _servers_from_request (or whatever its new name might be ;)

392: There's a pattern here that is repeated a few times. Maybe you can create a mapping dictionary for converting key names

419: This behavior will also happen through OSAPI, right? I'm not so sure we want to support special cases like this. It's a totally valid request to filter by fixed_ip and name where no results are returned.

480: This behavior seems odd. I'm thinking we should just return an empty list. What do you think?

Revision history for this message
Chris Behrens (cbehrens) wrote :

lazy load: The fix is actually described in the thread with the bug, but: OS API accesses Instance['virtual_interfaces']['network'], but instance_get_all() does not joinedload('virtual_interfaces.network'). When the sqlalchemy session becomes detached, OS API cannot do the query for 'network' when it's accessed. My new DB API routine joins it. I didn't add it to any other queries, because the OS API may be the only place where it's needed.

164: Sure

217: Ok.

215: I'd originally broken it out because I split the caller into the 1.0/1.1 Controllers, so it had 2 references. I'll merge it back in, since that's not the case anymore.

392: I didn't use a mapping table because the flipping needs to happen in a certain order (since I'm renaming name -> display_name, but instance_name to name. Doing those in reverse order would clobber the 'name' if both exist in the query). But I know how I can solve that.
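The ordering issue can be sketched like so (a hypothetical helper, not the branch's actual code):

```python
def rename_search_opts(search_opts):
    """Rename user-facing option names to internal ones. Order matters:
    'name' must move to 'display_name' before 'instance_name' moves to
    'name', or the second rename would clobber the first key's value."""
    opts = dict(search_opts)
    if 'name' in opts:
        opts['display_name'] = opts.pop('name')
    if 'instance_name' in opts:
        opts['name'] = opts.pop('instance_name')
    return opts

# Usage: both options present in one query
renamed = rename_search_opts({'name': 'web', 'instance_name': 'i-1'})
# renamed == {'display_name': 'web', 'name': 'i-1'}
```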

419: Yeah, I did this because I'm not sure 'fixed_ip' should be used via OS API. One should use 'ip' instead. If it's for EC2 only, fixed_ip this way is a lot more efficient query since it looks up via FixedIps table directly with no joins to find the entry. Decisions like these are difficult, as I'm not entirely sure what's most desirable to the community...but generally there is some sort of reason to the method of my madness. :) I'm cool with changing it.

480: I feel this way, too.. I was just trying to match the original behavior in single zone. I guess if EC2 should really return an error in this case, the error return should happen in the ec2 controller if it gets an empty list back.

Revision history for this message
Chris Behrens (cbehrens) wrote :

Ok, I think all of those issues are addressed.

Revision history for this message
Brian Waldon (bcwaldon) wrote :

Thanks for addressing those issues. Code looks good to me, now, but I'm seeing one test failure:

======================================================================
FAIL: test_get_servers_allows_status_v1_1 (nova.tests.api.openstack.test_servers.ServersTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/brian.waldon/valhalla/nova/servers-search/nova/tests/api/openstack/test_servers.py", line 1184, in test_get_servers_allows_status_v1_1
    self.assertEqual(res.status_int, 200)
AssertionError: 500 != 200
-------------------- >> begin captured logging << --------------------

...

(nova.api.openstack): TRACE: File "/Users/brian.waldon/valhalla/nova/servers-search/nova/api/openstack/servers.py", line 56, in index
(nova.api.openstack): TRACE: servers = self._get_servers(req, is_detail=False)
(nova.api.openstack): TRACE: File "/Users/brian.waldon/valhalla/nova/servers-search/nova/api/openstack/servers.py", line 126, in _get_servers
(nova.api.openstack): TRACE: context, search_opts=search_opts)
(nova.api.openstack): TRACE: File "/Users/brian.waldon/valhalla/nova/servers-search/nova/tests/api/openstack/test_servers.py", line 1174, in fake_get_all
(nova.api.openstack): TRACE: [power_state.RUNNING, power_state.BLOCKED])
(nova.api.openstack): TRACE: File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/unittest.py", line 350, in failUnlessEqual
(nova.api.openstack): TRACE: (msg or '%r != %r' % (first, second))
(nova.api.openstack): TRACE: AssertionError: [2, 1] != [1, 2]
(nova.api.openstack): TRACE:
--------------------- >> end captured logging << ---------------------

----------------------------------------------------------------------

Revision history for this message
Rick Harris (rconradharris) wrote :

Tests pass (for me), and code looks good. Really nice work here Chris.

review: Approve
Revision history for this message
Chris Behrens (cbehrens) wrote :

bcwaldon: Issue is this:

[15:51:38] <comstud> 1173 self.assertEqual(search_opts['state'],
[15:51:39] <comstud> 1174 [power_state.RUNNING,
                     power_state.BLOCKED])

The list is in reverse order in your test and I'm assuming order here. I've just wrapped those lists with set() so it does a set comparison. Try now. :)
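The order-insensitive comparison Chris describes amounts to the following (illustrative, using plain asserts and stand-in constants rather than the test case's assertEqual):

```python
# assertEqual on lists is order-sensitive; wrapping both sides in set()
# makes the comparison order-independent.
RUNNING, BLOCKED = 1, 2  # stand-ins for the power_state constants

received = [2, 1]              # order the fake zone call saw
expected = [RUNNING, BLOCKED]  # order the test listed them in

assert received != expected            # list comparison fails on order
assert set(received) == set(expected)  # set comparison passes
```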

Revision history for this message
Brian Waldon (bcwaldon) wrote :

Yep, all good now. Thank you, Dr. Behrens!

review: Approve
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Attempt to merge into lp:nova failed due to conflicts:

text conflict in nova/api/ec2/cloud.py
text conflict in nova/tests/test_metadata.py

Revision history for this message
Chris Behrens (cbehrens) wrote :

Merged trunk and resolved 2 conflicts. Need the approves again.
