Overall, from a first-pass review this looks pretty solid. I really like the approach of relying on the .changes file to carry the Vcs-* stuff. Not having to do upload tagging will be a welcome usability improvement.
I suppose the main thing you're looking for feedback on is using the git CLI vs pygit2 for retrieving arbitrary urls. I see what you mean that the API as-is requires creating a local repository object first; all the examples I see online take that approach. You could define a namespace to import the temporary repositories into, but I also don't see much of a problem with just calling the git CLI directly in this case.
Taking a step back, git-ubuntu seems to need this type of workaround with pygit2 fairly often, which makes me wonder whether it would be worth introducing a wrapper class with a convenience API that hides whether pygit2 or the git CLI is used underneath. Seems like it would be cleaner, although I'm not sure it'd be worth the effort. I vaguely recall we discussed something like this before... Anyway, just a thought.
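To make the wrapper idea concrete, here is a rough sketch of what such a class might look like. All names and methods here are hypothetical, not anything that exists in git-ubuntu today; the point is just that callers wouldn't need to know which backend handles each operation.

```python
import subprocess


class GitWrapper:
    """Hypothetical convenience wrapper that hides whether pygit2
    or the git CLI backs each operation (names are illustrative)."""

    def __init__(self, workdir):
        self.workdir = workdir

    def fetch_url(self, url, ref):
        # Fetching from an arbitrary URL is awkward in pygit2, so
        # this sketch shells out to the git CLI for it.
        subprocess.run(
            ["git", "-C", self.workdir, "fetch", url, ref],
            check=True,
        )

    def rev_parse(self, rev):
        # Simple lookups could equally go through pygit2; the CLI is
        # used here only to keep the sketch self-contained.
        result = subprocess.run(
            ["git", "-C", self.workdir, "rev-parse", rev],
            check=True, capture_output=True, text=True,
        )
        return result.stdout.strip()
```

Callers would then just use `GitWrapper`, and swapping a method from the CLI to pygit2 (or back) would be an internal change.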
Beyond that, a few small notes fwiw:
84e0a21bc1ad00497893a9e89b58484d48daef6a
- Good refactoring to move warnings up a level
- test_validate_upload_tag() - In the codedocs, the type of expected_result varies, since it could be either a bool or an exception class. (Might it be clearer to have expected_exception as a separate parameter, rather than overloading expected_result?)
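For illustration, a parametrization with a separate expected_exception could look something like this. The validate_upload_tag() stub here is a stand-in, not the real implementation:

```python
import pytest


def validate_upload_tag(tag):
    # Stand-in for the real function under test (hypothetical logic).
    if not tag.startswith("upload/"):
        raise ValueError("not an upload tag")
    return True


@pytest.mark.parametrize(
    "tag, expected_result, expected_exception",
    [
        ("upload/1.0-1", True, None),
        ("bogus", None, ValueError),
    ],
)
def test_validate_upload_tag(tag, expected_result, expected_exception):
    # Each parameter keeps a single, consistent type: expected_result
    # is always a bool (or None), expected_exception always an
    # exception class (or None).
    if expected_exception is not None:
        with pytest.raises(expected_exception):
            validate_upload_tag(tag)
    else:
        assert validate_upload_tag(tag) == expected_result
```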
9ae83e0d9a6e80e2160531b6023b48fb82c04fc6
- I wonder if validate_rich_history()'s code description would benefit from a bit of elaboration now that it has an extra level of indirection
- "to that upload tag" -> "to the upload tag"
ae13bed802f8c4626c2c41ce8eb910c50253b03d
- LGTM
db9c906fb0cc4cbb4ea9706999dfff304d5f7970
- If you plan to add pydocs for the parse_* routines, it could be handy to include an example of the line(s) each routine will parse as valid. Even just having that in a comment would be useful.
- Presumably this relies on code changes to dput and/or launchpad, so links to appropriate references (bugs or commits) in the changelog message or comments might be useful for us to check on status of those external development tasks later.
- Is the output of get_changes_file_url() dependable enough that we don't need to validate the url? (I'm imagining validation would check that it's a valid url, that the host is launchpad.net, etc.)
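Something along these lines is what I have in mind; the exact host rules (whether subdomains like api.launchpad.net should count, whether the files actually live on a different host) are assumptions that would need checking:

```python
from urllib.parse import urlparse


def looks_like_launchpad_url(url):
    """Illustrative validation only: accept https URLs whose host is
    launchpad.net or a subdomain of it."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return parsed.scheme == "https" and (
        host == "launchpad.net" or host.endswith(".launchpad.net")
    )
```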
- Should the urllib.request.urlopen call catch 404s and other fetch errors? Assuming the url is always a launchpad url, most fetch failures will likely be launchpad service outages or similar.
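Roughly, I mean distinguishing the two urllib error classes so the user gets an actionable message. The error classes are real urllib ones; the surrounding function and its error-handling policy are just a sketch:

```python
import urllib.error
import urllib.request


def fetch_changes_file(url, timeout=30):
    """Sketch of defensive fetching for a .changes file URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except urllib.error.HTTPError as exc:
        # 404 etc.: the server was reachable but the file is missing.
        raise RuntimeError(
            f"changes file not found ({exc.code}): {url}") from exc
    except urllib.error.URLError as exc:
        # DNS failure, connection refused, Launchpad outage, ...
        raise RuntimeError(
            f"could not reach {url}: {exc.reason}") from exc
```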
- Alternate way to check for keys:
- has_required_fields = all(
- field in changes
- for field in ['Vcs-Git', 'Vcs-Git-Commit', 'Vcs-Git-Refs']
- )
- if not has_required_fields:
- return None
+ if not changes.keys() >= {'Vcs-Git', 'Vcs-Git-Commit', 'Vcs-Git-Refs'}:
+ return None
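As a runnable check of that idiom (note the keys-view comparison needs a set on the right-hand side, not a list, or Python raises TypeError). The field values here are made up for the demo:

```python
# Hypothetical parsed .changes content, as a plain dict for the demo.
changes = {
    "Vcs-Git": "https://git.launchpad.net/ubuntu/+source/hello",
    "Vcs-Git-Commit": "84e0a21",
    "Vcs-Git-Refs": "refs/heads/ubuntu/devel",
}
required = {"Vcs-Git", "Vcs-Git-Commit", "Vcs-Git-Refs"}

# dict.keys() returns a set-like view, so >= means "is a superset of".
assert changes.keys() >= required

# Drop one field and the check fails, as expected.
del changes["Vcs-Git-Refs"]
assert not changes.keys() >= required
```

One caveat worth checking: if `changes` is a python-debian Deb822 object rather than a plain dict, its keys() may not behave as a set view, in which case the all()-based version is the safer choice.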
- Test cases are a bit hard to grok