Summary
The following points need to be kept in mind when using the synchronized keyword:
- A lock can be acquired by only one thread at a time; threads that fail to acquire it must wait.
- Each instance corresponds to its own lock (this), so different instances do not affect each other. Exception: when the lock object is *.class, or synchronized modifies a static method, all instances share the same lock.
- A method modified by synchronized releases the lock whether it completes normally or throws an exception.
Scope of the lock
- For a normal synchronized method, the lock is the current instance object (this).
- For a static synchronized method, the lock is the Class object of the current class.
- For a synchronized block, the lock is the object specified inside the synchronized parentheses.
Underlying implementation
Lock acquisition and release principle
synchronized is Java's built-in synchronization mechanism, so it is also known as intrinsic locking. It provides mutual-exclusion semantics and visibility: once one thread has acquired the lock, any other thread that tries to acquire it can only wait or block.
synchronized is implemented with a pair of monitorenter/monitorexit bytecode instructions, and the Monitor object is the basic building block of the synchronization in both cases. The difference is that a synchronized block is implemented with explicit monitorenter and monitorexit instructions, while a synchronized method is implemented implicitly via the ACC_SYNCHRONIZED flag.
Synchronized code block
public class Test1 {
    public void fun1() {
        synchronized (this) {
            System.out.println("fun111111111111");
        }
    }
}
Compile the .java file to a .class file with the javac command, then decompile the class file with javap. An excerpt of the decompiled bytecode:
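The original screenshot is not reproduced here; a representative `javap -c` excerpt for fun1() (reconstructed for illustration — constant-pool indices and exact offsets may vary slightly by compiler version):

```
public void fun1();
  Code:
     0: aload_0
     1: dup
     2: astore_1
     3: monitorenter            // acquire the monitor of `this`
     4: getstatic     #2       // Field java/lang/System.out:Ljava/io/PrintStream;
     7: ldc           #3       // String fun111111111111
     9: invokevirtual #4       // Method java/io/PrintStream.println:(Ljava/lang/String;)V
    12: aload_1
    13: monitorexit             // normal exit path: release the monitor
    14: goto          22
    17: astore_2
    18: aload_1
    19: monitorexit             // exception path: release the monitor before rethrowing
    20: aload_2
    21: athrow
    22: return
  Exception table:
     from    to  target type
         4    14    17   any
        17    20    17   any
```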
From the decompiled output we can see that, after compilation, the synchronized block is bracketed by the two bytecode instructions monitorenter and monitorexit. The Java Virtual Machine Specification describes the role of these two instructions roughly as follows:
Each object has a monitor lock. The monitor is locked while it is occupied. A thread executing the monitorenter instruction tries to acquire ownership of the monitor, as follows:
- If the monitor's entry count is 0, the thread enters the monitor and sets the entry count to 1; the thread is now the owner of the monitor.
- If the thread already owns the monitor and is merely re-entering, the monitor's entry count is incremented by 1.
- If another thread already owns the monitor, the thread blocks until the monitor's entry count drops to 0, then retries to take ownership of the monitor.
monitorexit:
- The thread executing monitorexit must be the owner of the monitor associated with objectref.
- When the instruction executes, the monitor's entry count is decremented by 1. If the count reaches 0, the thread exits the monitor and is no longer its owner; other threads blocked on the monitor may then try to take ownership of it.
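The entry-count rules above can be sketched as a toy model. This is a hypothetical ToyMonitor class for illustration only, not HotSpot's real ObjectMonitor; a real monitor blocks waiters instead of spinning.

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy model of the monitorenter/monitorexit entry-count rules described above.
public class ToyMonitor {
    private final AtomicReference<Thread> owner = new AtomicReference<>();
    private int entryCount = 0; // touched only while ownership is held

    public void enter() { // models monitorenter
        Thread me = Thread.currentThread();
        if (owner.get() == me) {  // already the owner: re-entry, just bump the count
            entryCount++;
            return;
        }
        // entry count is 0 from our perspective: spin until we win ownership via CAS
        while (!owner.compareAndSet(null, me)) {
            Thread.onSpinWait();  // a real monitor would block the thread here
        }
        entryCount = 1;
    }

    public void exit() { // models monitorexit
        if (owner.get() != Thread.currentThread())
            throw new IllegalMonitorStateException();
        if (--entryCount == 0) {
            owner.set(null);      // count hit 0: ownership released
        }
    }

    public static void main(String[] args) {
        ToyMonitor m = new ToyMonitor();
        m.enter();
        m.enter();                // re-entry: count -> 2
        m.exit();                 // count -> 1, still owned
        m.exit();                 // count -> 0, released
        System.out.println("released: " + (m.owner.get() == null));
    }
}
```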
Q: Does an exception within a synchronized block release the lock?
A: The lock is released automatically. This can be seen in the bytecode: monitorexit is emitted both on the normal exit path (offset 13) and on the exception path (offset 19). It also shows up in the Exception table.
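This can also be verified directly at runtime. A minimal sketch: one thread throws inside a synchronized block, yet the main thread can still acquire the same lock afterwards, so no deadlock occurs.

```java
public class ExceptionReleasesLock {
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            synchronized (lock) {
                throw new RuntimeException("boom"); // leaves the block abnormally
            }
        });
        t.setUncaughtExceptionHandler((th, e) -> { /* swallow for a clean demo */ });
        t.start();
        t.join(); // the thread died inside the synchronized block...
        synchronized (lock) { // ...yet the lock was released, so this does not block
            System.out.println("main acquired the lock");
        }
    }
}
```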
Synchronized method code
public class Test1 {
    // Locks on the current instance object (this)
    public synchronized void fun2() {
        System.out.println("fun2222222222222222222222");
    }

    // Static synchronized method: the lock object is the Class object of the current class
    public synchronized static void fun3() {
        System.out.println("fun33333333333333");
    }
}
Decompiled output after compilation (excerpt):
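The original screenshot is not reproduced here; a representative `javap -v` excerpt for the two methods (reconstructed for illustration — constant-pool indices may differ):

```
public synchronized void fun2();
  descriptor: ()V
  flags: ACC_PUBLIC, ACC_SYNCHRONIZED
  Code:
     0: getstatic     #2   // Field java/lang/System.out:Ljava/io/PrintStream;
     3: ldc           #3   // String fun2222222222222222222222
     5: invokevirtual #4   // Method java/io/PrintStream.println:(Ljava/lang/String;)V
     8: return

public static synchronized void fun3();
  descriptor: ()V
  flags: ACC_PUBLIC, ACC_STATIC, ACC_SYNCHRONIZED
  ...
```

Note there is no monitorenter/monitorexit in the method body; only the ACC_SYNCHRONIZED flag marks the method as synchronized.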
From the decompiled output, a synchronized method is not implemented with explicit monitorenter/monitorexit instructions. Instead, compared with an ordinary method, it carries an extra ACC_SYNCHRONIZED flag, and the JVM implements method synchronization based on this flag. When a method is invoked, the invocation instruction first checks whether the method has the ACC_SYNCHRONIZED access flag. If it does, the executing thread first acquires the monitor, executes the method body only after acquiring it, and releases the monitor when the method finishes. While the method is executing, no other thread can obtain the same monitor. Although the compiled results look different, there is no essential difference; the method's synchronization is simply achieved implicitly, without explicit bytecode instructions.
In other words, the ACC_SYNCHRONIZED access flag means: when a thread reaches the method and detects this flag, it implicitly performs the equivalent of the monitorenter/monitorexit pair around the method.
Wrap-up
The synchronized block is implemented with the monitorenter and monitorexit instructions: monitorenter marks the beginning of the synchronized block and monitorexit marks its end. When the monitorenter instruction executes, the thread attempts to acquire the lock, i.e., ownership of the monitor (a monitor is associated with every Java object via its header, which is how synchronized acquires its lock, and why any Java object can be used as a lock).
The monitor contains a counter. When the counter is 0 the lock can be acquired; after acquisition the counter is incremented to 1. Correspondingly, when the monitorexit instruction executes, the counter is decremented back to 0, indicating that the lock is released. If acquiring the object lock fails, the current thread blocks and waits until the lock is released by another thread.
A synchronized method has neither a monitorenter nor a monitorexit instruction; instead it carries the ACC_SYNCHRONIZED flag, which marks the method as synchronized. The JVM uses this access flag to recognize that a method is declared synchronized and to perform the corresponding synchronized invocation.
Reentrant locking principle
Both ReentrantLock and synchronized are reentrant locks.
Definition
Reentrancy means that the same thread can acquire the same lock multiple times (a thread may enter synchronized blocks guarded by the same lock repeatedly).
/* Reentrancy means that after a thread has acquired a lock, it can acquire the same lock again. */
public class Demo01 {
    public static void main(String[] args) {
        Runnable sellTicket = new Runnable() {
            @Override
            public void run() {
                synchronized (this) {
                    System.out.println("I am run");
                    test01();
                }
            }

            public void test01() {
                synchronized (this) { // same lock object: re-entered without blocking
                    System.out.println("I am test01");
                }
            }
        };
        new Thread(sellTicket).start();
        new Thread(sellTicket).start();
    }
}
Principle

The synchronized lock object has a counter (the recursions variable) that tracks how many times the thread has acquired the lock. Each re-entry increments the counter by 1; when a synchronized block finishes executing, the counter is decremented by 1. The lock is not released until the counter reaches zero.
- Executing monitorenter acquires the lock:
  - (monitor counter = 0, the lock can be acquired)
  - Execute method1(): monitor counter +1 -> 1 (lock acquired)
  - Execute method2(): monitor counter +1 -> 2
  - Execute method3(): monitor counter +1 -> 3
- Executing monitorexit releases the lock:
  - method3() finishes: monitor counter -1 -> 2
  - method2() finishes: monitor counter -1 -> 1
  - method1() finishes: monitor counter -1 -> 0 (lock released)
  - (monitor counter = 0, the lock is released)
Advantages

Reentrancy helps avoid some deadlocks (if a lock were not reentrant, a thread could not enter a synchronized block guarded by a lock it already holds, which would deadlock). It also allows better encapsulation of code: a synchronized block can be placed in a method, and that method called directly from another synchronized block thanks to reentrancy.
How visibility is guaranteed
This rests mainly on the Java memory model and the happens-before rules.
synchronized's happens-before rule is the monitor lock rule: unlocking a monitor happens-before every subsequent locking of that same monitor.
public class MonitorDemo {
    private int a = 0;

    public synchronized void writer() { // 1
        a++;                            // 2
    }                                   // 3

    public synchronized void reader() { // 4
        int i = a;                      // 5
    }                                   // 6
}
The happens-before relationships for this code are shown in the figure:
In the figure, each arrow connecting two nodes represents a happens-before relationship between them: the black arrows are derived from the program order rule, the red arrow from the monitor lock rule (thread A's unlock happens-before thread B's lock), and the blue arrows are further derived from these via the transitivity rule.
Here 2 happens-before 5. From one part of the definition of happens-before — if A happens-before B, then the result of A's execution is visible to B and A's execution order precedes B's — we can conclude: thread A first increments the shared variable a; by the 2 happens-before 5 relationship, the result of thread A's write is visible to thread B, i.e., the value of a read by thread B is 1.
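A runnable variant of MonitorDemo makes this observable (reader() is changed to return a so the value can be checked; otherwise the structure is as above):

```java
// Runnable variant of MonitorDemo: reader() returns a so visibility can be observed.
public class MonitorDemo {
    private int a = 0;

    public synchronized void writer() { a++; }
    public synchronized int reader() { return a; }

    public static void main(String[] args) throws InterruptedException {
        MonitorDemo d = new MonitorDemo();
        Thread writerThread = new Thread(d::writer);
        writerThread.start();
        // The monitor lock rule guarantees that once writer() releases the lock,
        // a subsequent reader() acquisition of the same lock sees a == 1.
        while (d.reader() == 0) {
            Thread.onSpinWait();
        }
        writerThread.join();
        System.out.println("a = " + d.reader());
    }
}
```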
Object layout
In the HotSpot VM, the layout of an object in memory can be divided into three areas: the object header (Header), instance data (Instance Data), and alignment padding (Padding).
Java object header
HotSpot's object header consists of three main parts: the Mark Word, the Klass Pointer (type pointer), and, for arrays, the array length.
The Mark Word stores the object's own runtime data, such as its hash code, GC generational age, lock status flags, the thread holding the lock, the biased thread ID, and the bias timestamp.
Mark Word
For example: the hash code, the object's GC age, the object lock, lock status flags, the biased-lock thread ID, the bias timestamp, the array length (for array objects), etc. The Java object header generally occupies two machine words (in a 64-bit virtual machine, one machine word is 8 bytes, i.e., 64 bits).
Optimization of synchronized locks
The monitorenter and monitorexit bytecodes in the JVM rely on the underlying operating system's Mutex Lock. But using a mutex requires suspending the current thread and switching from user mode to kernel mode, which is very costly; yet in most real-world cases, synchronized methods run in a single-threaded (contention-free) environment, where invoking a mutex every time would seriously hurt performance. So in JDK 1.6, a large number of optimizations were introduced into the lock implementation — such as lock coarsening, lock elimination, lightweight locking, biased locking, and adaptive spinning — to reduce the overhead of lock operations.
- Lock Coarsening: reduces unnecessary back-to-back unlock/lock operations by expanding multiple consecutive locks into one lock with a larger scope.
- Lock Elimination: uses the JIT compiler's escape analysis at runtime to remove locks protecting data that cannot be shared with other threads outside the current synchronized block; escape analysis also enables stack allocation of thread-local objects (which in turn reduces garbage-collection overhead on the heap).
- Lightweight Locking: this implementation is based on the assumption that most synchronized code in real programs runs without lock contention (i.e., in an effectively single-threaded environment). In the absence of contention, it avoids the operating-system-level heavyweight mutex entirely and instead relies on a single CAS atomic instruction in monitorenter and monitorexit to acquire and release the lock. When contention does occur, a thread whose CAS fails falls back to the OS mutex and blocks, to be woken when the lock is released.
- Biased Locking: avoids even the unnecessary CAS atomic instruction when acquiring a lock in the absence of contention, because although a CAS is cheap compared with a heavyweight lock, it still has significant local latency.
- Adaptive Spinning: when a thread's CAS fails while acquiring a lightweight lock, it busy-waits (spins) and retries before falling back to the OS heavyweight lock (mutex/semaphore) associated with the monitor; if it is still unsuccessful after a certain number of attempts, it enters the blocked state on the monitor's mutex.
Monitor
A monitor can be understood as a synchronization tool or mechanism, and is often described as an object. Every Java object has an invisible lock, called the internal lock or monitor lock.
What is the basic structure of Monitor?
- Owner field: initially NULL, meaning no thread currently owns the monitor record; set to the thread's unique identity when a thread successfully takes the lock, and reset to NULL when the lock is released.
- EntryQ field: associates a system mutex (semaphore) used to block all threads that fail in their attempt to lock the monitor record.
- RcThis field: the number of threads blocked on or waiting for the monitor record.
- Nest field: implements the reentrancy count.
- HashCode field: holds the hash code copied from the object header (and possibly the GC age).
- Candidate field: avoids unnecessary wakeups of blocked or waiting threads. Since only one thread can own the lock at a time, waking every blocked or waiting thread whenever the lock is released would cause needless context switches (blocked to ready, then blocked again after losing the race for the lock), seriously degrading performance. Candidate takes only two values: 0 means no thread needs to be woken; 1 means a successor thread should be woken to contend for the lock.
- Initially, the Owner in the Monitor is null.
- When Thread-2 executes synchronized(obj), it sets the Monitor's Owner to Thread-2; a Monitor can have only one Owner.
- While Thread-2 holds the lock, if Thread-3, Thread-4, and Thread-5 also execute synchronized(obj), they enter the EntryList and block.
- When Thread-2 finishes the synchronized block, it wakes the waiting threads in the EntryList to compete for the lock; the competition is non-fair.
- Thread-0 and Thread-1 in the WaitSet are threads that previously acquired the lock but whose condition was not met, so they entered the waiting state.
The monitor record is a thread-private data structure: each thread has a list of available monitor records, and there is also a global available list. Each locked object is associated with a monitor, whose Owner field holds the unique identifier of the thread owning the lock, indicating that the lock is occupied by that thread.
synchronized achieves thread synchronization through the Monitor, which in turn relies on the underlying operating system's Mutex Lock. Locks that depend on the OS mutex are called "heavyweight locks"; to reduce the performance cost of acquiring and releasing them, "biased locks" and "lightweight locks" were introduced.
Types of locks
In Java SE 1.6, synchronized locks have four states: lock-free, biased lock, lightweight lock, and heavyweight lock; a lock is gradually upgraded as contention increases. Locks can be upgraded but not downgraded, a policy intended to improve the efficiency of acquiring and releasing locks.
Lock upgrade path: lock-free → biased lock → lightweight lock → heavyweight lock (this process is irreversible).
synchronized is a pessimistic lock: before operating on a synchronized resource, it must first lock it, and this lock lives in the Java object header.
Mark Word contents corresponding to the four lock states:
Lock coarsening
In principle, when adding a synchronization lock we keep the scope of the synchronized block as small as possible (synchronize only over the actual scope of the shared data), so that the number of operations that need to be synchronized is minimized; when there is lock contention, this also lets the thread waiting for the lock acquire it as early as possible.
That principle is correct in most scenarios. But if a series of operations repeatedly locks and unlocks the same object, or the locking even occurs inside a loop body, then frequent mutex synchronization causes unnecessary performance loss even when there is no thread contention.
Here is a sample Java class; the discussion below is based on its javap output.
public static String test04(String s1, String s2, String s3) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1);
    sb.append(s2);
    sb.append(s3);
    return sb.toString();
}
This is the situation described above: in the consecutive append() operations, the JVM detects that the same object is being locked repeatedly, and extends (coarsens) the lock synchronization to cover the whole series of operations, so the entire sequence of append() calls needs to lock only once.
Lock elimination
Lock elimination means removing locks that the code requests but that the virtual machine's JIT compiler detects, at runtime, cannot possibly contend on shared data. The main basis for this decision is escape analysis: if the JVM determines that data in a synchronized section can never escape and be accessed by other threads, it treats the data as stack data — thread-exclusive and needing no synchronization — and eliminates the lock.
Of course, in real development we usually know which data is thread-exclusive and needs no lock, but many methods in the Java API are themselves synchronized; in those cases the JVM decides whether the code actually needs the lock, and if the data does not escape, the lock is eliminated. For example, consider operations on String data: since String is an immutable class, string concatenation always produces new String objects, so the javac compiler automatically optimizes String concatenation. Before JDK 1.5, it is converted into consecutive append() operations on a StringBuffer object; in JDK 1.5 and later, into consecutive append() operations on a StringBuilder object.
public static String test03(String s1, String s2, String s3) {
    String s = s1 + s2 + s3;
    return s;
}
Compiling the above code and inspecting it with javap:
In the code above, the JVM determines that the builder object cannot escape the method, so it is treated as a thread-exclusive resource that needs no synchronization, and lock elimination is applied. (Many operations on Vector can likewise have their locks eliminated when the data cannot escape and needs no defensive synchronization.)
Biased lock
In layman's terms, a biased lock means that at runtime the lock of an object becomes biased toward a particular thread. That is, with the biased-locking mechanism enabled, once a thread has obtained the lock, the next time that thread wants it, it does not need to go through lock acquisition again (the synchronized overhead is effectively skipped) and can execute the synchronized code directly. This suits scenarios with little contention.
To address this, the HotSpot authors optimized synchronized in Java SE 1.6 by introducing biased locking. When a thread accesses a synchronized block and acquires the lock, the thread ID of the lock bias is stored in the lock record in the object header and in the stack frame; afterwards, the thread does not need to perform CAS operations to lock and unlock when entering and exiting the synchronized block. It simply tests whether the Mark Word in the object header stores a bias toward the current thread; if so, the thread has acquired the lock.
Bias Lock Acquisition Process
- Check the biased-lock flag and the lock flag bits in the Mark Word: if the biased flag is 1 and the lock flag bits are 01, the lock is in a biasable state.
- If it is biasable, test whether the thread ID in the Mark Word equals the current thread's; if so, execute the synchronized code directly; otherwise go to the next step.
- The current thread competes for the lock via a CAS operation: if it succeeds, the thread ID in the Mark Word is set to the current thread's ID and the synchronized code executes; if it fails, go to the next step.
- A CAS failure indicates contention. When the global safepoint is reached, the thread that previously held the biased lock is suspended, the biased lock is upgraded to a lightweight lock, and the thread blocked at the safepoint then continues executing the synchronized code.
Bias lock release process:
A thread holding a biased lock releases it only when another thread tries to compete for it; the holder never actively releases the biased lock. Revoking a biased lock requires waiting for a global safepoint (a point at which no bytecode is executing): the thread holding the biased lock is first suspended, and then it is determined whether the lock object is still locked. After revocation, the lock reverts to the unlocked state (flag bits 01) or is upgraded to a lightweight lock (flag bits 00).
Lightweight Locks
An important point about the lightweight locks introduced in JDK 1.6: lightweight locks are not a replacement for heavyweight locks. They are an optimization based on the observation that, in most cases, synchronized blocks experience no contention. They reduce the thread-blocking overhead caused by heavyweight locks and thus improve concurrency performance.
But when multiple threads compete for the lock at the same time, a lightweight lock can inflate into a heavyweight lock.
Lightweight lock acquisition
- When a thread is about to enter a synchronized block and the Mark Word is in the lock-free state, the virtual machine first creates a space called the Lock Record in the current thread's stack frame, used to store a copy of the object's current Mark Word, officially called the "Displaced Mark Word".
- Copy the Mark Word in the object header into the lock record.
- After the copy succeeds, the virtual machine uses a CAS operation to update the object's Mark Word to a pointer to the Lock Record, and points the owner pointer in the Lock Record at the object's Mark Word. If the update succeeds, go to step 4; otherwise go to step 5.
- If the update succeeds, this thread owns the lock, and the lock flag bits are set to 00, indicating the lightweight-lock state.
- If the update fails, the VM checks whether the object's Mark Word points into the current thread's stack frame. If it does, the current thread already owns the lock and can proceed into the synchronized code. Otherwise, multiple threads are competing for the lock: the lightweight lock inflates into a heavyweight lock, the Mark Word stores a pointer to the heavyweight (mutex) lock, and the threads waiting behind it enter the blocked state.
Lightweight lock release
An atomic CAS operation attempts to replace the Displaced Mark Word back into the object header. If it succeeds, no contention occurred; if it fails, contention exists on the current lock, and the lock inflates into a heavyweight lock.
Advantages and disadvantages of locks
| Lock | Advantages | Disadvantages | Usage scenarios |
|---|---|---|---|
| Biased lock | Locking and unlocking need no CAS operations and add no extra overhead; only a nanosecond-scale gap compared with executing an unsynchronized method | If threads contend for the lock, the extra cost of lock revocation is incurred | Scenarios where only one thread accesses the synchronized block |
| Lightweight lock | Competing threads do not block, improving responsiveness | Threads that keep failing to acquire the lock spin, consuming CPU | Scenarios that pursue response time, with very fast synchronized blocks |
| Heavyweight lock | Threads in contention do not spin and thus do not consume CPU | Threads block and response time is slow; frequent lock acquisition and release under heavy multithreading costs significant performance | Scenarios that pursue throughput, with longer-running synchronized blocks |
Lock Upgrade Process
- No lock: no contention.
- Biased lock: a single thread accesses the lock over a long period and acquires the biased lock.
- Lightweight lock: when the current lock is biased and another thread accesses it, the biased lock is upgraded to a lightweight lock.
- Heavyweight lock: if there is only one waiting thread, it waits by spinning; but when spinning exceeds a certain number of attempts, or when one thread holds the lock, one is spinning, and a third arrives, the lightweight lock is upgraded to a heavyweight lock.
Synchronized vs ReentrantLock
Defects of synchronized
- Efficiency: the lock is released in few situations — only when the code finishes executing or an exception is thrown; you cannot set a timeout when trying to acquire the lock, and you cannot interrupt a thread that is waiting for the lock. By contrast, a Lock can be interrupted and supports timeouts.
- Not flexible enough: the timing of acquiring and releasing the lock is fixed, and each lock has only a single condition (some object); read/write locks are comparatively more flexible.
- There is no way to know whether the lock was acquired successfully, whereas a Lock can report its status.
- A thread blocked on synchronized cannot be interrupted.
How Lock addresses these issues
Without going into the Lock interface in depth, consider four of its methods:
- lock()
- unlock()
- tryLock()
- tryLock(long, TimeUnit)
synchronized locking is associated with only one condition (whether or not the lock has been acquired), which is inflexible; later, the combination of Condition and Lock solved this problem.
When multiple threads compete for a synchronized lock, the threads that did not get it can only keep waiting, without the possibility of interruption, which hurts performance under high concurrency. ReentrantLock's lockInterruptibly() method lets a waiting thread respond to interruption: a thread that has waited too long can be interrupted, and ReentrantLock responds by no longer keeping it waiting. With this mechanism, ReentrantLock can avoid the kind of unbreakable wait that synchronized imposes.
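These capabilities can be sketched with the timed tryLock(long, TimeUnit) variant: unlike synchronized, the waiting thread can simply give up after a timeout (a minimal sketch; the 100 ms timeout is arbitrary).

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Demonstrates a Lock feature synchronized lacks: a timed, abortable lock attempt.
public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // main holds the lock for the whole demo
        Thread t = new Thread(() -> {
            try {
                // With synchronized this thread would wait forever;
                // tryLock lets it bail out after the timeout.
                if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
                    try {
                        System.out.println("acquired");
                    } finally {
                        lock.unlock();
                    }
                } else {
                    System.out.println("gave up after timeout");
                }
            } catch (InterruptedException e) {
                // tryLock(timeout) and lockInterruptibly() also respond to interrupts
                System.out.println("interrupted while waiting");
            }
        });
        t.start();
        t.join();      // main never released the lock, so the attempt must time out
        lock.unlock();
    }
}
```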
ReentrantLock is the commonly used reentrant mutual-exclusion implementation of Lock. It has the same basic behavior and semantics as the implicit monitor lock accessed via synchronized methods and statements, but is more powerful.
synchronized is implemented in software by the JVM and is simple to use; even after Lock arrived in JDK 5, it remains widely used.
Notes on using synchronized
- The lock object cannot be null, because the lock information is stored in the object header.
- The scope should not be too large: an overly large synchronized scope slows the program down and makes the code more error-prone.
- Avoid deadlocks.
- Where you have the choice, use neither Lock nor the synchronized keyword directly: prefer the ready-made classes in the java.util.concurrent package. If nothing there fits and synchronized satisfies the business need, use synchronized, since it needs less code and leaves less room for error.
About the author

From the exploration and practice of Seven, a front-line programmer; continuously learning and iterating.
This article is included in my personal blog: https://
Official account: seven97 — welcome to follow!