Merge lp://staging/~stub/pqm/single-script into lp://staging/pqm
Status: Rejected
Rejected by: Robert Collins
Proposed branch: lp://staging/~stub/pqm/single-script
Merge into: lp://staging/pqm
Diff against target: 183 lines (has conflicts); text conflict in bin/pqm
To merge this branch: bzr merge lp://staging/~stub/pqm/single-script
Related bugs: none
Reviewer: Robert Collins (review pending)
Review via email: mp+1741@code.staging.launchpad.net
Commit message
Description of the change
Stuart Bishop (stub) wrote:
Robert Collins (lifeless) wrote:
On Mon, 2008-11-17 at 10:42 +0000, Stuart Bishop wrote:
> This branch adds an option to process a single item in the queue rather than all items in the queue.
>
> Launchpad development needs this as scripts regularly change the config file used by pqm, and we need these changes to take effect immediately after the current job.
Would it be worth just making pqm reevaluate the config after every
script? That would seem to have little performance impact and be simpler
for users than an option to unbreak things.
-Rob
--
GPG key available at: <http://
Stuart Bishop (stub) wrote:
On Mon, Nov 17, 2008 at 6:42 PM, Robert Collins
<email address hidden> wrote:
> On Mon, 2008-11-17 at 10:42 +0000, Stuart Bishop wrote:
>> This branch adds an option to process a single item in the queue rather than all items in the queue.
>>
>> Launchpad development needs this as scripts regularly change the config file used by pqm, and we need these changes to take effect immediately after the current job.
>
> Would it be worth just making pqm reevaluate the config after every
> script? That would seem to have little performance impact and be simpler
> for users than an option to unbreak things.
This would involve us altering the config in place or using a symlink
to switch between configs. We could do it that way, but it seems
uglier and gives us no easy way to tell which config a currently
running instance is using (hmm... that use case smells like a YAGNI,
but anyway...)
One thing I was thinking of - depending on how our experiment with
continuous integration instead of test-before-commit goes, we might
want to implement a hold/skip filter to PQM. These jobs are just
skipped rather than being rejected or processed. This would allow jobs
to be backed up when PQM is running in a mode where they cannot be
landed rather than rejecting them and requiring people to resubmit. If
we took the approach of reloading the config after each run, rather
than just processing a single job each invocation, then jobs may end
up being processed out of order when the config is changed several
times when the queue is being processed. This again is probably a
YAGNI.
Reloading the config would be useful for other tasks though, such as
most of the use cases for stop.patch.
--
Stuart Bishop <email address hidden>
http://
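For illustration only, the symlink switching mentioned above might look roughly like this; the paths and the switch_config helper are hypothetical and none of this is taken from pqm itself:

    # Hedged sketch, not pqm code: keep a stable name (e.g. pqm.conf) pointing
    # at one of several real config files and repoint it atomically.
    import os

    def switch_config(link_path, new_target):
        """Swap the config symlink without leaving a window with no config."""
        tmp = link_path + '.new'
        if os.path.lexists(tmp):
            os.remove(tmp)              # clear any stale temporary link
        os.symlink(new_target, tmp)     # build the new link under a temp name
        os.rename(tmp, link_path)       # rename() over the old link is atomic on POSIX

    # Hypothetical usage:
    # switch_config('/srv/pqm/pqm.conf', '/srv/pqm/pqm-testfix.conf')

A running instance would still be using whatever file the link pointed at when it loaded its config, which is the "no easy way to tell which config is in use" problem noted above.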
Robert Collins (lifeless) wrote:
On Mon, 2008-11-17 at 15:33 +0000, Stuart Bishop wrote:
>
> > Would it be worth just making pqm reevaluate the config after every
> > script? That would seem to have little performance impact and be
> simpler
> > for users than an option to unbreak things.
>
> This would involve us altering the config in place or using a symlink
> to switch between configs. We could do it that way, but it seems
> uglier and gives us no easy way to tell which config a currently
> running instance is using (hmm... that use case smells like a YAGNI,
> but anyway...)
How does the config get altered at the moment - is it a different
command-line switch, an env variable? I thought it was in-place editing
anyhow :).
> One thing I was thinking of - depending on how our experiment with
> continuous integration instead of test-before-commit goes, we might
> want to implement a hold/skip filter to PQM. These jobs are just
> skipped rather than being rejected or processed. This would allow jobs
> to be backed up when PQM is running in a mode where they cannot be
> landed rather than rejecting them and requiring people to resubmit. If
> we took the approach of reloading the config after each run, rather
> than just processing a single job each invocation, then jobs may end
> up being processed out of order when the config is changed several
> times when the queue is being processed. This again is probably a
> YAGNI.
Well, reloading the config will refresh the queue (because the queuedir
is defined by the config), so it should, for a naive-but-obvious
implementation, behave identically to:
  pqm --one --run
> Reloading the config would be useful for other tasks though, such as
> most of the use cases for stop.patch.
Agreed. So it sounds to me like reloading the config is something that
would be generally useful, and it avoids a new option; let's reload the
config every time.
-Rob
--
GPG key available at: <http://
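For illustration of the "reload the config every time" approach agreed on above, a minimal sketch might look like the following; process_queue and run_job are hypothetical stand-ins rather than pqm's real entry points, and the config layout is assumed:

    # Hedged sketch, not pqm code: re-read the config before every job so that
    # config changes on disk take effect from the very next job onward.
    import os
    try:
        from configparser import ConfigParser    # Python 3
    except ImportError:
        from ConfigParser import ConfigParser    # Python 2, the era of this branch

    def process_queue(config_path, run_job, logger):
        while True:
            configp = ConfigParser()
            configp.read([config_path])               # pick up any edits made on disk
            queuedir = configp.get('DEFAULT', 'queuedir')   # assumed config layout
            jobs = sorted(os.listdir(queuedir))       # stand-in for pqm's real ordering
            if not jobs:
                break                                 # queue drained
            # One job per pass; the next pass reloads the config, so a config
            # switch made while this job runs applies to the following job.
            run_job(os.path.join(queuedir, jobs[0]), configp, logger)

Because the queue directory is taken from the freshly loaded config on each pass, this behaves much like the repeated 'pqm --one --run' invocation described above, without adding a new option.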
Unmerged revisions
184. By Stuart Bishop
     Add command line option to only process a single queued job
Preview Diff
=== modified file 'bin/pqm'
--- bin/pqm 2008-11-21 00:43:51 +0000
+++ bin/pqm 2009-03-05 06:50:27 +0000
@@ -72,7 +72,140 @@
        raise PQMTlaFailure(sender, ["VCS command %s %s failed (%s): %s" % (cmd, args, status, msg)] + output)
    return output

+<<<<<<< TREE
def do_read_mode(logger, options, mail_reply, mail_server, from_address, fromaddr):
+=======
+def do_run_mode(queuedir, logger, logdir, mail_reply, mail_server,
+                from_address, fromaddr, configp, options):
+    scripts = find_patches(
+        queuedir, logger, rev_optionhandler, configp, options)
+    if options.single and scripts:
+        # User elected to only process a single script this run.
+        scripts = [scripts[0]]
+    (goodscripts, badscripts) = ([], [])
+    for script in scripts:
+        if not os.path.isfile("%s/stop.patch" % queuedir):
+            run_one_script(logger, script, logdir, goodscripts, badscripts,
+                mail_reply, mail_server, from_address, fromaddr, options)
+
+    if options.print_report:
+        for (patchname, logname) in goodscripts:
+            print "Patch: " + patchname
+            print "Status: success"
+            print "Log: " + logname

+        for (patchname, logname) in badscripts:
+            print "Patch: " + patchname
+            print "Status: failure"
+            print "Log: " + logname

+
+def run_one_script(logger, script, logdir, goodscripts, badscripts,
+                   mail_reply, mail_server, from_address, fromaddr, options):
+    # FIXME: This is currently extremely hard to test. move it to the library,
+    # and test it!
+    try:
+        success = False
+        try:
+            logger.info('trying script ' + script.filename)
+            logname = os.path.join(logdir, os.path.basename(script.filename) + '.log')
+            (sender, subject, msg, sig) = read_email(logger, open(script.filename))
+            if options.verify_sigs:
+                sigid,siguid = verify_sig(
+                    script.getSender(), msg, sig, 0, logger, options.keyring)
+            output = []
+            failedcmd=None
+
+            # ugly transitional code
+            pqm.logger = logger
+            pqm.workdir = workdir
+            pqm.runtla = runtla
+            pqm.precommit_hook = precommit_hook
+            (successes, unrecognized, output) = script.run()
+
+            logger.info('successes: %s' % (successes,))
+            logger.info('unrecognized: %s' % (unrecognized,))
+            success = True
+            goodscripts.append((script.filename, logname))
+        except PQMCmdFailure, e:
+            badscripts.append((script.filename, logname))
+            successes = e.goodcmds
+            failedcmd = e.badcmd
+            output = e.output
+            unrecognized=[]
+        except PQMException, e:
+            badscripts.append((script.filename, logname))
+            successes = []
+            failedcmd = []
+            output = [str(e)]
+            unrecognized=[]
+        except Exception, e:
+            # catch all to ensure we get some output in uncaught failures
+            output = [str(e)]
+            raise
+        if mail_reply:
+            send_mail_reply(success, successes, unrecognized,
+                mail_server, from_address, script.getSender(),
+                fromaddr, failedcmd, output, script)
+        else:
+            logger.info('not sending mail reply')
+    finally:
+        # ensure we always unlink the script file.
+        log_list(logname, output)
+        os.unlink(script.filename)
+
+def send_mail_reply(success, successes, unrecognized, mail_server, from_address, sender, fromaddr, failedcmd, output, script):
+    if success:
+        retmesg = mail_format_successes(successes, "Command was successful.", unrecognized)
+        if len(successes) > 0:
+            statusmsg='success'
+        else:
+            statusmsg='no valid commands given'
+    else:
+        retmesg = mail_format_successes(successes, "Command passed checks, but was not committed.", unrecognized)
+        retmesg+= "\n%s" % failedcmd
+        retmesg+= '\nCommand failed!'
+        if not script.debug:
+            retmesg += '\nLast 20 lines of log output:'
+            retmesg += ''.join(output[-20:])
+        else:
+            retmesg += '\nAll lines of log output:'
+            retmesg += ''.join(output)
+        statusmsg='failure'
+    server = smtplib.SMTP(mail_server)
+    server.sendmail(from_address, [sender], 'From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n%s\n' % (fromaddr, sender, statusmsg, retmesg))
+    server.quit()
+
+def mail_format_successes(successes, command_msg, unrecognized):
+    retmesg = []
+    for success in successes:
+        retmesg.append('> ' + success)
+        retmesg.append(command_msg)
+    for line in unrecognized:
+        retmesg.append('> ' + line)
+        retmesg.append('Unrecognized command.')
+    return string.join(retmesg, '\n')
+
+def log_list(logname, list):
+    f = open(logname, 'w')
+    for l in list:
+        f.write(l)
+    f.close()
+
+def run(pqm_subdir, queuedir, logger, logdir, mail_reply, mail_server,
+        from_address, fromaddr, options):
+    lockfile=LockFile(os.path.join(pqm_subdir, 'pqm.lock'), logger,
+        options.no_act, options.cron_mode)
+    lockfile.acquire()
+    try:
+        if options.run_mode:
+            do_run_mode(queuedir, logger, logdir, mail_reply, mail_server,
+                from_address, fromaddr, configp, options)
+    finally:
+        lockfile.release()
+
+def do_read_mode(logger, options):
+>>>>>>> MERGE-SOURCE
    sender = None
    try:
        (sender, subject, msg, sig) = read_email(logger)

=== modified file 'pqm/commandline.py'
--- pqm/commandline.py 2008-07-16 13:19:53 +0000
+++ pqm/commandline.py 2009-03-05 06:50:27 +0000
@@ -7,6 +7,7 @@
# Copyright © 2003, 2005 Walter Landry
# Copyright © 2008 Canonical Ltd.
# Author: Tim Penhey <tim@penhey.net>
+# Author: Stuart Bishop <stuart@stuartbishop.net>

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -42,7 +43,7 @@


def default_pqm_config_files():
-    """Return the default config files taking into account the environment."""
+    """Return the single config files taking into account the environment."""
    if 'PQM_CONFIG' in os.environ:
        file_names = [os.environ['PQM_CONFIG']]
    else:
@@ -85,6 +86,7 @@
        read_mode=False,
        run_mode=False,
        print_report=False,
+        single=False,
        show_verison=False,
        verify_sigs=True,
        queuedir=None
@@ -113,6 +115,9 @@
                      " (used with --run)"))
    parser.add_option("--report", action="store_true", dest="print_report",
                      help="Print patch report (used with --run)")
+    parser.add_option("--single", action="store_true", dest="single",
+                      help="Process a single job, rather than all queued jobs "
+                           "(used with --run)")
    parser.add_option("--no-verify", action="store_false", dest="verify_sigs",
                      help="Don't verify signatures")
    parser.add_option("--queuedir", metavar="DIR",
This branch adds an option to process a single item in the queue rather than all items in the queue.
Launchpad development needs this as scripts regularly change the config file used by pqm, and we need these changes to take effect immediately after the current job.
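For reference, with the option this branch adds (see the preview diff above), a single-job run might be invoked along these lines; the config path is hypothetical, while PQM_CONFIG and the --run, --single and --report flags appear in the diff:

  PQM_CONFIG=/srv/pqm/launchpad.conf pqm --run --single --report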