Merge lp://staging/~brendan-donegan/checkbox/story871_session_restore_dbus into lp://staging/checkbox
Proposed by
Brendan Donegan
Status: | Merged |
---|---|
Approved by: | Brendan Donegan |
Approved revision: | 2289 |
Merged at revision: | 2289 |
Proposed branch: | lp://staging/~brendan-donegan/checkbox/story871_session_restore_dbus |
Merge into: | lp://staging/checkbox |
Diff against target: |
177 lines (+121/-0) 4 files modified
plainbox/contrib/dbus-mini-client.py (+19/-0), plainbox/plainbox/impl/service.py (+86/-0), plainbox/plainbox/impl/session/jobs.py (+12/-0), plainbox/plainbox/impl/session/test_jobs.py (+4/-0) |
To merge this branch: | bzr merge lp://staging/~brendan-donegan/checkbox/story871_session_restore_dbus |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Brendan Donegan (community) | Approve | ||
Daniel Manrique (community) | Approve | ||
Review via email:
Description of the change
Adds methods to the DBus API to allow saving and resuming of sessions. The main methods added are PreviousSessionFile(), which can be used to determine whether a previous session was saved; PersistentSave(), which actually saves the session; Resume(), which restores the session to the saved state; and Clean(), which wipes the previously saved state.
There appear to be some issues integrating this functionality, as running a resumed session seems to result in various kinds of crashes in the mini client.
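The lifecycle these four methods describe can be sketched with a small in-memory stand-in. This is a hypothetical illustration only: the method names come from the proposal, but the class, storage format, and return conventions are invented here and do not reflect the real plainbox DBus service.

```python
import json
import os
import tempfile


class FakeSessionService:
    """Hypothetical stand-in for the session save/resume DBus API
    (PreviousSessionFile, PersistentSave, Resume, Clean). The JSON
    storage format is an assumption made for illustration."""

    def __init__(self, storage_path):
        self._path = storage_path
        self.job_state_map = {}

    def PreviousSessionFile(self):
        # Return the path of a previously saved session, or "" if none.
        return self._path if os.path.exists(self._path) else ""

    def PersistentSave(self):
        # Persist the current session state to disk.
        with open(self._path, "w") as stream:
            json.dump(self.job_state_map, stream)

    def Resume(self):
        # Replace the live state map with the one read from disk.
        with open(self._path) as stream:
            self.job_state_map = json.load(stream)

    def Clean(self):
        # Wipe the previously saved state.
        if os.path.exists(self._path):
            os.remove(self._path)
```

A client would typically call PreviousSessionFile() at startup, Resume() if it returned a path, and Clean() once the session is finished.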
Hi!
OK, long comment here, because the reason why this code didn't entirely work was more intricate than we thought.
First, this will need rebasing after the recent changes to the miniclient to run local jobs first. It seems to be enough to run session_object.PersistentSave() after run_local_jobs() (see the merge conflict, it'll be pretty obvious).
Second, I was able to replicate an "unable to lookup object wrapper" error. It's similar to the problem I worked on yesterday, but backwards, somehow! Luckily that gave me exactly the knowledge I needed to help debug this.
When starting a brand-new session, JobStateWrappers get created for each object in the session's job_state_map; these are based on existing JobDefinitions supplied by the provider.
Then, when local jobs run, they may create new JobDefinitions outside the provider. The code we pushed yesterday creates the JobDefinitionWrapper and JobStateWrapper for these.
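The idea behind that wrapping step can be sketched as follows. Note this is a simplified sketch: the function name mirrors check_and_wrap_new_jobs from the branch, but the signature, the `published` mapping, and the path scheme are assumptions made here for illustration.

```python
def check_and_wrap_new_jobs(job_list, published):
    """Hypothetical sketch: publish a DBus wrapper only for jobs that
    do not have one yet (e.g. jobs generated by local jobs after the
    session started). `published` maps job names to object paths."""
    new_paths = []
    for name in job_list:
        if name not in published:
            # Invent an object path for the newly discovered job.
            published[name] = "/plainbox/job/%s" % name
            new_paths.append(published[name])
    return new_paths
```

Jobs that already have a wrapper are left untouched, so calling this repeatedly after each local job run is safe.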
Now, what happens with the session resume code is as follows:
1- The session has to be created. Upon creating the session with a job_list, which is usually the set of all the jobs exposed by the provider, we will have a SessionState object, as well as many JobStateWrapper objects, one for each of the jobs the session knows about. Each one points in turn to a JobDefinitionWrapper (the /plainbox/job/xxxxx objects).
2- When session_object.Resume is called, the native session overwrites the old job_list and updates the job_state_map with a new one, taken from the saved session data. This contains a bunch of JobState objects that are defined in the job_list. However, on the DBus service side, the wrappers for these JobState objects never get created, meaning that the SessionStateWrapper's state map points to the old, and by now deleted, set of JobState wrappers. That's why when trying to load those JobStates you get the "unable to lookup object wrapper" problem.
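The stale-wrapper failure can be reproduced in miniature. This sketch is hypothetical: the class and helper names are invented, but it shows the core problem, which is that wrappers are keyed by the identity of the native object, and Resume replaces those objects wholesale.

```python
class JobState:
    """Hypothetical stand-in for a native JobState object."""

    def __init__(self, job_name):
        self.job_name = job_name


# Service-side map from native object identity to published wrapper path.
wrapper_map = {}


def publish_wrapper(state):
    wrapper_map[id(state)] = "/plainbox/jobstate/%s" % state.job_name


def lookup_wrapper(state):
    try:
        return wrapper_map[id(state)]
    except KeyError:
        # Mimics the "unable to lookup object wrapper" error.
        raise LookupError("unable to lookup object wrapper")


# Fresh session: a wrapper exists for the original JobState object.
original = JobState("foo")
publish_wrapper(original)

# Resume: the native session rebuilds job_state_map with *new* objects.
# The wrapper map still points at the old object, so lookup fails.
resumed = JobState("foo")
```

Looking up `original` succeeds, while looking up `resumed` raises LookupError even though it represents the same job.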
The solution I came up with is similar to the existing check_and_wrap_new_jobs method. Once the resumed session is loaded, we need to look at the native job_list, and wrap and publish any objects that didn't exist before.
The code is somewhat different from check_and_wrap_new_jobs because, while I only need to create JobDefinitionWrappers for jobs that don't already exist, we need to create new JobStateWrappers for *all* the jobs in the job_state_map.
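That asymmetry can be sketched as a post-Resume fix-up. Again, this is a hypothetical illustration: the function name, the dict-based session, and the path scheme are assumptions, not the real plainbox implementation.

```python
class JobDef:
    """Hypothetical stand-in for a native JobDefinition."""

    def __init__(self, name):
        self.name = name


def rewrap_after_resume(session, job_wrappers, state_wrappers):
    """Sketch of the fix-up described above: wrap only JobDefinitions
    that are not yet published, but rebuild the JobStateWrapper for
    *every* entry in the resumed job_state_map."""
    for job in session["job_list"]:
        # New definitions (e.g. from local jobs) get a wrapper;
        # existing ones keep the wrapper they already have.
        job_wrappers.setdefault(job.name, "/plainbox/job/%s" % job.name)
    # Every JobState object is new after Resume, so every one of them
    # needs a freshly published wrapper; drop the stale ones.
    state_wrappers.clear()
    for name, state in session["job_state_map"].items():
        state_wrappers[id(state)] = "/plainbox/jobstate/%s" % name
```

After this runs, definition wrappers are reused where possible while no state wrapper points at a deleted native object.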
I'll propose a merge request to your branch; this at least runs correctly, but I didn't validate all the behavior. Even if you want to implement things a bit differently, this should give you an idea of where to look.
I'll set to "Needs Fixing" but hopefully this information plus the upcoming merge request should help get things working easily.