I've been reading up on the Java Virtual Machine Instruction Set and noticed that when using instructions to invoke methods (e.g. invokestatic, invokevirtual, etc.) that are marked synchronized, it's up to that particular bytecode instruction to acquire the monitor on the receiver object. Similarly, when returning from a method, it's up to the instruction that leaves the method to release the monitor when the method is synchronized. This seems strange, given that there are explicit monitorenter and monitorexit bytecodes for managing monitors. Is there a particular reason for the JVM designing these instructions this way, rather than just compiling the methods to include the monitorenter and monitorexit instructions where appropriate?
Are you asking why there are two ways of doing the same thing?
When a method is marked as synchronized, it would be redundant to also emit monitorenter/monitorexit instructions. And if it only had monitorenter/monitorexit instructions, you would not be able to see externally that the method is synchronized (without reading the actual bytecode).
There are more than a few examples of two or more ways of doing the same thing. Each has relative strengths and weaknesses. (E.g. many of the single-byte instructions are short versions of a two-byte instruction.)
EDIT: I must be missing something in the question, because the caller doesn't need to know whether the callee is synchronized.
A method declared synchronized produces code with the ACC_SYNCHRONIZED flag set on the method, rather than explicit monitorenter/monitorexit instructions in its body.
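As a small illustration (class and method names are my own), the synchronized flag is part of the method's metadata, so it is visible through reflection without inspecting any bytecode — which is exactly the external visibility the answer above mentions. A block-synchronized method, by contrast, carries no such flag:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class SyncFlagDemo {
    // Compiles with the ACC_SYNCHRONIZED flag set; the body contains
    // no monitorenter/monitorexit instructions.
    public static synchronized void locked() { }

    // Compiles to explicit monitorenter/monitorexit around the block;
    // the method itself is NOT flagged as synchronized.
    public static void blockLocked() {
        synchronized (SyncFlagDemo.class) { }
    }

    // Checks the ACC_SYNCHRONIZED flag via reflection.
    public static boolean isDeclaredSynchronized(String name) throws Exception {
        Method m = SyncFlagDemo.class.getDeclaredMethod(name);
        return Modifier.isSynchronized(m.getModifiers());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isDeclaredSynchronized("locked"));      // true
        System.out.println(isDeclaredSynchronized("blockLocked")); // false
    }
}
```

Running `javap -v` on the compiled class would show the same distinction in the `flags:` line of each method.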
Back in the mid-90s, there were no Java JIT compilers and micro-synchronisation was thought to be a really great idea.
So you are calling these synchronised methods a lot. Even Vector has 'em! You could do without the extra bytecodes to interpret. And it's not just when the code is being run: the class file is bigger, with extra instructions, plus setting up the try/finally tables and the verification that something naughty hasn't been slipped in. Just my guess.
Was searching on the same question, and came across the following article. It looks like method-level synchronization generates slightly more efficient bytecode than block-level synchronization: for block-level synchronization, explicit bytecode is generated to handle exceptions, which is not needed for method-level synchronization. So a possible answer could be that these two ways exist to make method-level synchronization slightly faster.
http://www.ibm.com/developerworks/ibm/library/it-haggar_bytecode/
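To see what that compiler-generated exception-handling bytecode buys you, here is a small sketch (all names are mine) of the behavior it guarantees: when a synchronized block exits via an exception, the generated handler still executes monitorexit, so the monitor is released and another thread can acquire it afterwards:

```java
public class MonitorReleaseDemo {
    private static final Object LOCK = new Object();

    // Exits the synchronized block by throwing; the compiler-generated
    // handler runs monitorexit before the exception propagates.
    static void throwInsideLock() {
        synchronized (LOCK) {
            throw new IllegalStateException("boom");
        }
    }

    // Returns true if a fresh thread can acquire LOCK promptly,
    // i.e. the monitor was not left held.
    static boolean lockIsFree() throws InterruptedException {
        final boolean[] acquired = {false};
        Thread t = new Thread(() -> {
            synchronized (LOCK) {
                acquired[0] = true;
            }
        });
        t.start();
        t.join(1000); // finishes almost instantly if LOCK is free
        return acquired[0];
    }

    public static void main(String[] args) throws InterruptedException {
        try {
            throwInsideLock();
        } catch (IllegalStateException expected) {
            // the monitor was released despite the exception
        }
        System.out.println(lockIsFree()); // prints "true"
    }
}
```

For an ACC_SYNCHRONIZED method, the JVM's method exit (normal or abrupt) releases the monitor itself, so no such handler needs to appear in the bytecode.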
Are you asking why synchronised methods use explicit monitor entry and exit instructions when the JVM could infer them by looking at the method's attributes?
I would guess it is because, as well as methods, it is possible to synchronise arbitrary blocks of code:
Therefore it makes sense to use the same instructions for both synchronised methods and synchronised blocks.
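For example (class and field names are my own), a synchronized block can lock on any object — not just this — and can cover only the critical section of a method, which is something no method-level attribute could express:

```java
public class BlockSyncCounter {
    private final Object lock = new Object(); // arbitrary monitor object
    private int count = 0;

    // Synchronizes only the critical section, on a private lock object,
    // rather than marking the whole method synchronized.
    public void increment() {
        synchronized (lock) {
            count++;
        }
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockSyncCounter c = new BlockSyncCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(c.get()); // prints 40000
    }
}
```

The body of increment() compiles to monitorenter/monitorexit around the increment; those instructions have to exist for this case regardless of how synchronized methods are handled.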
<speculation>
By delegating lock management to the caller, some optimizations become possible. For example, suppose you have a class like this:
And suppose it is used by this caller class:
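The answer's original code samples did not survive extraction; the following is a plausible sketch (all names are hypothetical) of the pair being described — a class with synchronized methods, and a caller whose instance never escapes the method:

```java
// Hypothetical reconstruction of the class the answer describes.
public class Foo {
    private int value;

    // Declared synchronized: normally each call acquires this
    // object's monitor.
    public synchronized void set(int v) { value = v; }
    public synchronized int get() { return value; }
}

class Caller {
    static int useFoo() {
        Foo foo = new Foo(); // never stored in a field or passed anywhere:
                             // escape analysis can prove it stays
                             // thread-local, so the JIT may elide the
                             // locking entirely
        foo.set(42);
        return foo.get();
    }

    public static void main(String[] args) {
        System.out.println(useFoo()); // prints 42
    }
}
```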
Based on escape analysis, the JVM knows that foo never escapes the thread. Because of this, it can avoid the implicit MONITORENTER and MONITOREXIT instructions. Avoiding unnecessary locks may have been more performance-driven in the earlier days of the JVM, when speed was a rare commodity.
</speculation>