> Should we take the following lesson from all of these statistics?
First note that none of these benchmarks involve AWT, Swing,
JavaBeans, image loading, Servlets, EJB, RMI, Jini, or other very
commonly used packages that generate at least some concurrency, and so
they may not be representative of the majority of Java
applications. (They might be, but we cannot tell from these data
alone.) This is a problem for existing Java benchmarks more generally.
> * For most programs, including many multithreaded benchmarks,
> 99% of the synchronization cost involves useless locks
> (e.g., locks that are not being used to synchronize threads).
> Thus, the big issue is not how to use faster techniques when
> we need to communicate between threads, but how to avoid
> paying the cost of synchronization when we aren't.
Well, I'd say there are lots of big issues, ranging from those
geared for mostly-single-threaded programs to those for explicitly
concurrent or parallel programs.
* Avoiding paying for synch when it is not needed. There's a
pressing need to somehow tame approaches to escape analysis (like
the 4 presented at OOPSLA) and turn them into something that can be
cheaply, routinely done, maybe even in a JIT. I have no idea
whether this will ever be possible in the general case, but
cheap approximations might be approachable.
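To make the target of these analyses concrete, here is a minimal sketch (my illustration, not from the benchmarks discussed): a StringBuffer that never escapes its method, so every one of its synchronized append() calls takes a lock that no other thread could ever contend. This is exactly the kind of uncontendable locking that escape analysis can prove safe to elide.

```java
// Sketch: a "useless" lock in the sense of the quoted statistics.
// The StringBuffer below is reachable only from this stack frame,
// yet append() and toString() are synchronized methods, so each call
// acquires and releases the buffer's monitor.  An escape analysis
// proves the buffer is thread-local, letting a compiler or JIT elide
// those monitor operations entirely.
public class EscapeDemo {
    static String greet(String name) {
        StringBuffer sb = new StringBuffer(); // never escapes this method
        sb.append("Hello, ");                 // synchronized, but uncontendable
        sb.append(name);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(greet("world")); // prints "Hello, world"
    }
}
```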
* Further extensions of these analyses, such as lock coarsening
(as in Martin Rinard's (pre-Java) work).
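A small sketch of what coarsening does (illustrative code of mine, not Rinard's): a loop that locks and unlocks the same monitor on every iteration can be transformed so the lock is acquired once around the whole loop, trading a little lock-hold time for far fewer monitor operations.

```java
// Lock coarsening, before and after.  Both methods compute the same
// result; the coarsened form performs one lock/unlock pair instead
// of one per array element.
public class CoarsenDemo {
    private final int[] data = new int[1000];

    // As written by the programmer: data.length lock/unlock pairs.
    int sumFine() {
        int sum = 0;
        for (int i = 0; i < data.length; i++) {
            synchronized (this) { sum += data[i]; }
        }
        return sum;
    }

    // After coarsening: a single lock/unlock pair, same result.
    int sumCoarse() {
        int sum = 0;
        synchronized (this) {
            for (int i = 0; i < data.length; i++) sum += data[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        CoarsenDemo d = new CoarsenDemo();
        System.out.println(d.sumFine() == d.sumCoarse()); // prints "true"
    }
}
```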
  * Continuing to reduce the cost of lock, unlock, wait,
    notify/notifyAll, and interrupt.
* Providing performance-sensitive concurrency constructs (and a memory
model that supports them) so people can tune code.
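One example of the kind of construct I mean, and of why it leans on the memory model (my sketch, assuming a hypothetical Canceller class): a volatile cancellation flag lets a worker poll for shutdown without locking on every iteration; volatile is precisely the memory-model guarantee that makes the unsynchronized read safe.

```java
// A performance-sensitive idiom that depends on memory-model support:
// the worker reads `cancelled` without a lock, and volatile guarantees
// the write in cancel() becomes visible to it.
public class Canceller implements Runnable {
    private volatile boolean cancelled = false; // visible across threads

    public void cancel()        { cancelled = true; }  // no lock needed
    public boolean isCancelled() { return cancelled; }

    public void run() {
        while (!cancelled) {
            // ... perform one unit of work ...
            Thread.yield();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Canceller c = new Canceller();
        Thread t = new Thread(c);
        t.start();
        c.cancel();   // worker observes the flag and exits its loop
        t.join();
        System.out.println("worker stopped");
    }
}
```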
  * Discovering cheap but sound Java-level designs, classes, and utilities for
    common concurrency problems. (This is what I mainly do, but as
    mentioned a few times before, these might sometimes impact memory
    model issues.)
* Providing other VM enhancements that reduce overhead of concurrency
(wrt Threads, GC, etc.) and/or exploit multiprocessors.
* Improving performance of IO, networking, and related libraries in both
sequential and concurrent programs.
* Teaching people to make better choices (for example HashMap vs
Hashtable) among library classes.
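To spell out that example (a sketch of mine, with a hypothetical choose() helper): Hashtable synchronizes every get() and put(), while HashMap does not, so single-threaded code using Hashtable pays a per-call lock cost for nothing.

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

// Choosing between library classes with and without built-in locking.
// HashMap is the better choice for thread-confined maps; reserve
// Hashtable (or external synchronization) for genuinely shared state.
public class MapChoice {
    // Pick a map implementation based on whether it will be shared.
    static Map choose(boolean sharedAcrossThreads) {
        return sharedAcrossThreads ? (Map) new Hashtable()
                                   : (Map) new HashMap();
    }

    public static void main(String[] args) {
        Map local = choose(false);  // no locking on each access
        local.put("key", "value");
        System.out.println(local.get("key")); // prints "value"
    }
}
```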
* Providing tools to make sure that manual performance tunings do
not introduce safety violations.
And surely other things as well.
As David Bacon said, there's plenty of work for all sorts of people in
research, advanced development, and production, and a receptive
audience for any useful results!
(While I suppose all this is tangential to the alleged charter of this
list, it is worth putting memory model issues in perspective wrt other
performance concerns.)