Assuming the program is well synchronized, when processor A
releases the mutex:
1. A (write) memory barrier, which needs to be done anyway.
2. Skip the current page (~4KB); the GC will compact
When B acquires that mutex:
1. A (read) memory barrier, which needs to be done anyway.
2. Update the virtual page table (but NO trap!). I am
no expert, but it seems to me that this shouldn't take
longer than allocating a new page from the OS.
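The release/acquire pairing in the steps above is just the usual happens-before rule for locks. A minimal Java sketch (class and method names are mine, purely illustrative) of processor A publishing an object under a mutex that processor B later acquires:

```java
// Hypothetical sketch: A publishes a freshly allocated object under
// a lock; when B takes the same lock, A's unlock (release barrier,
// step 1 on A's side) paired with B's lock (acquire barrier, step 1
// on B's side) guarantees B sees the fully initialized object.
import java.util.ArrayDeque;
import java.util.Queue;

public class Handoff {
    private final Object lock = new Object();
    private final Queue<int[]> queue = new ArrayDeque<>();

    void produce(int v) {               // runs on "processor A"
        int[] obj = new int[] { v };    // allocate and initialize
        synchronized (lock) {           // unlock = release barrier
            queue.add(obj);
        }
    }

    Integer consume() {                 // runs on "processor B"
        synchronized (lock) {           // lock = acquire barrier
            int[] obj = queue.poll();
            return obj == null ? null : obj[0];
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Handoff h = new Handoff();
        Thread a = new Thread(() -> h.produce(42));
        a.start();
        a.join();
        System.out.println(h.consume()); // prints 42
    }
}
```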
If the program is not well synchronized, it seems
Anyway, don't forget the benefit: there is no
conditional jump before every access to every object
(a check that exists in at least one implementation I've read
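For contrast, the per-access conditional being avoided looks roughly like the following (a sketch of a generic software read barrier with forwarding pointers, not any particular VM's code):

```java
// Sketch of the conditional jump the page-protection scheme avoids:
// every reference load first checks whether the compacting GC has
// moved the object, and follows a forwarding pointer if so.
public class ReadBarrier {
    static class Obj {
        Obj forward;   // non-null once the GC has moved this object
        int payload;
    }

    // The check that would run before *every* object access:
    static Obj read(Obj ref) {
        if (ref.forward != null) {   // taken only for moved objects
            ref = ref.forward;
        }
        return ref;
    }

    public static void main(String[] args) {
        Obj old = new Obj();
        Obj copy = new Obj();
        copy.payload = 7;
        old.forward = copy;          // simulate a GC move
        System.out.println(read(old).payload); // prints 7
    }
}
```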
--- Eliot Moss <email@example.com> wrote:
> Still, if processor A is creating new objects and sticking them
> into a work queue for processor B, and the queue is getting short,
> looks like synchronization on pretty much every object creation to me ....
> I'm not saying your scheme won't work (other people may judge that),
> only that it seems pretty heavy-weight.
> -- E
Doron Rajwan, mailto:firstname.lastname@example.org
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel