Max,
Well done on tracking down the problem and thanks for the patch. Aren't threading problems fun? :-)
My main concern with the proposed change is that I'd prefer the Cache object to have the intelligence to Do The Right Thing wrt locking, rather than it being left up to the client to guard access to it. As it stands, we're deciding to use an LRU cache in apps/branch.py and paying the consequences through increased code complexity several layers deeper. What do you think about having a decorator class that provides the synchronized access? E.g. changing the cache creation line to be something like ...
graph_cache = ThreadSafeCache(bzrlib.lru_cache.LRUCache(10))
and introducing ThreadSafeCache() as the layer with the RLock?
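For concreteness, here's a rough sketch of what I have in mind. It's only illustrative: ThreadSafeCache is a hypothetical name, and the real wrapper would need to mirror whatever subset of the LRUCache interface the callers actually use.

```python
import threading


class ThreadSafeCache(object):
    """Wrap a cache object and serialize all access with an RLock.

    Illustrative sketch only -- the method set below is a guess at
    what the callers need, not the full LRUCache interface.
    """

    def __init__(self, cache):
        self._cache = cache
        self._lock = threading.RLock()

    def __getitem__(self, key):
        with self._lock:
            return self._cache[key]

    def __setitem__(self, key, value):
        with self._lock:
            self._cache[key] = value

    def __contains__(self, key):
        with self._lock:
            return key in self._cache

    def get(self, key, default=None):
        with self._lock:
            return self._cache.get(key, default)
```

Using an RLock rather than a plain Lock means a method can safely call another locked method on the same instance without deadlocking, which keeps the wrapper composable.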
OTOH, this bug is critical and perhaps we ought to just land it as is, particularly as John is looking to rework the caching architecture Real Soon Now anyhow.
Any second opinions?