>Interested parties (i.e., you all) should get their comments in! This
>should be as airtight a document as possible.
Probably like whispering in the Grand Canyon at this time, and I suspect
that these points have been made before. Anyway...
The reachability model is so weak that it makes any sort of reachability
management well nigh impossible. Indeed, in the presence of Reference
objects, the definition of reachability can be undecidable. The
specification says 'A reachable object is any object that can be accessed
in any potential continuing computation from any live thread."
I appreciate that this is little changed, if at all, from the original JLS.
All the same, consider
    import java.util.*;

    public class Test35 {
        WeakHashMap whm = new WeakHashMap();

        public Long create() {
            Long k = new Long(1234);
            whm.put(k, null);    // k is never used again after this call
            Long result = null;
            for (Iterator it = whm.keySet().iterator(); it.hasNext(); ) {
                Long kTemp = (Long) it.next();
                result = kTemp;
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(new Test35().create() != null);
        }
    }
The naive view is that this program always outputs true, because the key
stored in the WeakHashMap is always returned. However, optimisation may
cause the reference stored in k to be lost immediately after the call to
put(), since k is never used again. So, at that point is the object
reachable or not?
If it is, then a strong reference will be created later on, and returned to
the caller. If it is not, then it can be reclaimed, and null is returned to
the caller. So its subsequent use in the program is determined by whether
it is reachable after the call to put(). But according to the definition of
reachability, it is reachable after the call to put() if it can subsequently
be accessed; and whether it can be accessed depends on whether it has been
reclaimed, which in turn depends on whether it is reachable. The definition
is circular, and deciding it in general means deciding the program's future
behaviour.
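For what it is worth, later versions of Java (9 and up) give the programmer an explicit handle on exactly this problem: java.lang.ref.Reference.reachabilityFence. A sketch of the example above with a fence added (class and variable names are mine, not from any specification); the fence forces k to be treated as reachable through the iteration, so the weak entry cannot have been cleared first:

```java
import java.lang.ref.Reference;
import java.util.Iterator;
import java.util.WeakHashMap;

public class Test35Fenced {
    WeakHashMap<Long, Object> whm = new WeakHashMap<>();

    public Long create() {
        Long k = new Long(1234);
        whm.put(k, null);
        System.gc();   // even an eager collector cannot clear the entry here
        Long found = null;
        for (Iterator<Long> it = whm.keySet().iterator(); it.hasNext(); ) {
            found = it.next();
        }
        // Guarantees k is considered strongly reachable up to this point,
        // regardless of any optimisation that would otherwise discard it.
        Reference.reachabilityFence(k);
        return found;
    }

    public static void main(String[] args) {
        System.out.println(new Test35Fenced().create() != null);
    }
}
```

With the fence in place the naive reading becomes the guaranteed one: the program always prints true.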
Where there are finalizers, I think the same undecidability issues arise.
Consider FileOutputStream which was discussed very recently in this forum.
Its finalizer closes its associated channel. But the channel's use of
methods of FileOutputStream is only allowed if the channel has not been closed.
In this scenario, even synchronizing the finalizer and all its methods
doesn't resolve the undecidability.
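The shape of the hazard is roughly the following (a hypothetical sketch of my own, not the actual FileOutputStream source): a method can still be executing on an object that has become unreachable, so the finalizer may close the underlying resource out from under it.

```java
import java.io.IOException;

public class FragileStream {
    private boolean closed = false;

    public void write(int b) throws IOException {
        // If the caller holds no other reference, 'this' may become
        // unreachable part-way through this method; the collector may then
        // run finalize() concurrently, closing the channel while this
        // write is still in progress.
        if (closed) throw new IOException("stream closed");
        // ... write b to the underlying channel ...
    }

    @Override
    protected void finalize() {
        closed = true;   // stands in for closing the associated channel
    }

    public static void main(String[] args) throws IOException {
        new FragileStream().write(42);
        System.out.println("ok");
    }
}
```

Synchronizing write() and finalize() on the same lock serializes them, but it cannot decide the real question: whether the object was still reachable when write() began.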
On the more general topic of reachability management, the API specification
of SoftReference includes the observation:
"Thus a sophisticated cache can, for example, prevent its most recently
used entries from being discarded by keeping strong referents to those
entries, leaving the remaining entries to be discarded at the discretion of
the garbage collector."
What does this mean? It appears that a smart enough optimizer can negate
any attempt I make to retain a strong reference, on the grounds that doing
so does not affect the functionality of the program. But I can hardly
implement a cache that affects the functionality of the program, because
that defeats the object.
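As I read it, the javadoc's suggestion looks something like the sketch below (all class and member names are my own invention): every entry is softly reachable, and the most recently used ones are additionally held strongly in an access-ordered LinkedHashMap. Whether an optimizer is permitted to discard those strong referents, on the grounds that nothing else reads them, is exactly the question above.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class MruCache<K, V> {
    private final Map<K, SoftReference<V>> soft = new HashMap<>();
    private final LinkedHashMap<K, V> strong;   // strongly held MRU entries

    public MruCache(final int strongCapacity) {
        // Access-ordered map that drops its least recently used strong
        // referent once more than strongCapacity entries are held; dropped
        // entries remain softly reachable and may be reclaimed.
        this.strong = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > strongCapacity;
            }
        };
    }

    public void put(K key, V value) {
        soft.put(key, new SoftReference<>(value));
        strong.put(key, value);         // promote into the strong MRU set
    }

    public V get(K key) {
        SoftReference<V> ref = soft.get(key);
        V v = (ref == null) ? null : ref.get();
        if (v != null) strong.put(key, v);   // refresh recency
        return v;
    }

    public static void main(String[] args) {
        MruCache<String, String> c = new MruCache<>(2);
        c.put("a", "1");
        c.put("b", "2");
        c.put("c", "3");   // "a" leaves the strong set, stays softly held
        System.out.println(c.get("c"));
    }
}
```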
The specification keeps life exciting for compiler writers and memory
architecture designers, but it's a real pain for those of us who have to
try to use very average developers to write correct software.
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel
This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:00:44 EDT