ReentrantReadWriteLock Usage Scenarios
ReentrantReadWriteLock is a Java read/write lock that allows simultaneous access by multiple read threads, but only one write thread, which blocks all other read and write threads while it holds the lock. This lock is designed to improve performance, especially when the number of read operations far exceeds the number of write operations.
In concurrent scenarios, to address thread-safety issues we typically use the synchronized keyword or ReentrantLock, the JUC implementation of the Lock interface. Both are exclusive locks, meaning only one thread can hold the lock at any given moment.
In some business scenarios most of the data is only read and very little is written; reads alone cannot compromise data correctness, so using an exclusive lock in such a scenario creates an obvious performance bottleneck. For this read-heavy, write-light situation, Java provides another implementation of the Lock interface: ReentrantReadWriteLock, the read/write lock.
ReentrantReadWriteLock
In short, it provides read-read concurrency, read-write mutual exclusion, and write-write mutual exclusion. If an object is read concurrently far more often than it is written, ReentrantReadWriteLock can improve concurrency efficiency while still guaranteeing thread safety. First, let's take a look at the two demos Doug Lea has prepared for us.
CachedData
A use case for cached objects: caches are generally read concurrently far more often than they are written, so they are ideally suited to ReentrantReadWriteLock.
The code is as follows:
class CachedData {
    // The specific object being cached
    Object data;
    // Whether the current cached object is valid; volatile ensures visibility
    volatile boolean cacheValid;
    // The read/write lock
    final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    // Business processing logic
    void processCachedData() {
        // To read the data, first take the read lock; once it is held, no one is writing concurrently
        rwl.readLock().lock();
        // After getting the read lock, check whether the cached object is still valid
        if (!cacheValid) {
            // Must release read lock before acquiring write lock
            // Classic handling: you cannot acquire the write lock directly while holding the read lock,
            // because the write lock is exclusive and the code would deadlock here.
            // So release the read lock first, then acquire the write lock.
            rwl.readLock().unlock();
            rwl.writeLock().lock();
            try {
                // Recheck state because another thread might have
                // acquired write lock and changed state before we did.
                // Classic handling number two: when data is processed inside an exclusive lock, always do a second check,
                // because thread 1 may release the write lock and thread 2 may immediately acquire it;
                // without the double check the same work could be done multiple times.
                if (!cacheValid) {
                    data = ...
                    // The cached object was updated successfully, so reset the flag to true
                    cacheValid = true;
                }
                // Downgrade by acquiring read lock before releasing write lock
                // This is the lock downgrade trick: acquiring the read lock before releasing the write lock
                // prevents another thread from grabbing the write lock and changing the cached object
                // as soon as this thread releases it. Because reads and writes are mutually exclusive,
                // no other thread can modify the cache until this read lock is released.
                rwl.readLock().lock();
            } finally {
                rwl.writeLock().unlock(); // Unlock write, still hold read
            }
        }

        try {
            use(data);
        } finally {
            rwl.readLock().unlock();
        }
    }
}
RWDictionary
Doug Lea's second demo is a concurrent container. For a concurrent container we would normally use ConcurrentHashMap directly, but we can also combine a non-thread-safe container with a ReentrantReadWriteLock to build one. If reads are far more frequent than writes, this approach still performs well.
class RWDictionary {
    // The underlying non-thread-safe container
    private final Map<String, Data> m = new TreeMap<String, Data>();
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private final Lock r = rwl.readLock();
    private final Lock w = rwl.writeLock();

    public Data get(String key) {
        // Reading data: take the read lock
        r.lock();
        try { return m.get(key); }
        finally { r.unlock(); }
    }
    public String[] allKeys() {
        // Reading data: take the read lock
        r.lock();
        try { return m.keySet().toArray(new String[0]); }
        finally { r.unlock(); }
    }
    public Data put(String key, Data value) {
        // Writing data: take the write lock
        w.lock();
        try { return m.put(key, value); }
        finally { w.unlock(); }
    }
    public void clear() {
        // Writing data: take the write lock
        w.lock();
        try { m.clear(); }
        finally { w.unlock(); }
    }
}
Characteristics of ReentrantReadWriteLock
A read/write lock allows a resource to be accessed by multiple read threads at the same moment, but while a write thread holds the lock, all read threads and other write threads are blocked.
When analyzing the mutual exclusivity of WriteLock and ReadLock, we can compare three pairs: WriteLock with WriteLock, WriteLock with ReadLock, and ReadLock with ReadLock.
The characteristics of read and write locks are summarized here:
- fairness options: supports both non-fair (default) and fair lock acquisition; non-fair mode outperforms fair mode in throughput;
- reentrancy: re-entry is supported: a thread that holds the read lock can acquire it again, a thread that holds the write lock can acquire the write lock again, and while holding the write lock it can also acquire the read lock (a short sketch follows this list);
- lock downgrade: a write lock can be converted into a read lock. The usual sequence is:
- Acquiring a write lock: the thread first acquires a write lock to ensure exclusive access when modifying data.
- Acquiring a read lock: while the write lock is held, the thread can acquire the read lock again.
- Release write lock: the thread holds the read lock while releasing the write lock.
- Release read lock: finally, the thread releases the read lock.
In this way, the write lock is demoted to a read lock, allowing concurrent reads from other threads, but still excluding write operations from other threads.
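Here is a minimal sketch of the reentrancy behaviour described in the list above; the class name is illustrative, while the ReentrantReadWriteLock calls are the standard JDK API:
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

        // The write lock is reentrant for the owning thread.
        rwl.writeLock().lock();
        rwl.writeLock().lock(); // re-acquired by the same thread, does not block
        // While holding the write lock, the same thread may also take the read lock.
        rwl.readLock().lock();

        System.out.println(rwl.getWriteHoldCount()); // 2
        System.out.println(rwl.getReadHoldCount());  // 1

        // Release in reverse order.
        rwl.readLock().unlock();
        rwl.writeLock().unlock();
        rwl.writeLock().unlock();
    }
}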
Next up, a little extra about lock downgrades
- lock downgrade
Lock demotion refers to the demotion of a write lock into a read lock. If the current thread owns the write lock, then releases it, and then finally acquires the read lock, such a segmented process cannot be called lock demotion. Lock demotion is the process of holding the (currently owned) write lock, then acquiring the read lock, and subsequently releasing the (previously owned) write lock.
Next look at an example of lock demotion. Because the data changes infrequently, multiple threads can concurrently process the data. When the data changes, if the current thread senses that the data has changed, it prepares the data, while the other processing threads are blocked until the current thread finishes preparing the data, as shown in the code below:
public void processData() {
    readLock.lock();
    if (!update) {
        // Must release the read lock first
        readLock.unlock();
        // Lock downgrade begins with the acquisition of the write lock
        writeLock.lock();
        try {
            if (!update) {
                // Prepare the data (omitted)
                update = true;
            }
            readLock.lock();
        } finally {
            writeLock.unlock();
        }
        // Lock downgrade complete: the write lock has been downgraded to a read lock
    }
    try {
        // Use the data (omitted)
    } finally {
        readLock.unlock();
    }
}
In the above example, when the data changes, the update variable (a volatile boolean) is set to false. All threads entering the processData() method can then sense the change, but only one thread acquires the write lock; the other threads block on the lock() calls of the read and write locks. After the current thread acquires the write lock and finishes preparing the data, it acquires the read lock and then releases the write lock, completing the lock downgrade.
Is acquiring the read lock necessary during lock downgrade? The answer is yes, mainly to guarantee data visibility. If the current thread released the write lock directly without acquiring the read lock, another thread (call it thread T) could acquire the write lock and modify the data, and the current thread would not see thread T's update. If instead the current thread acquires the read lock, i.e., follows the lock downgrade steps, thread T is blocked until the current thread has used the data and released the read lock; only then can thread T acquire the write lock to update the data.
ReentrantReadWriteLock does not support lock upgrading (holding the read lock, acquiring the write lock, and finally releasing the read lock). The reason is again data visibility: if the read lock has been acquired by multiple threads and one of them were allowed to acquire the write lock and update the data, its update would not be visible to the other threads that still hold the read lock.
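To see why lock upgrading is not supported in practice, here is a minimal sketch (the class name is illustrative): a thread that already holds the read lock cannot obtain the write lock, so tryLock() fails, and a blocking lock() call at that point would deadlock the thread against itself.
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockUpgradeDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

        rwl.readLock().lock();
        // The write lock cannot be granted while any read lock is held, including our own.
        boolean upgraded = rwl.writeLock().tryLock();
        System.out.println("upgrade succeeded? " + upgraded); // false
        // A plain rwl.writeLock().lock() here would block this thread forever.
        rwl.readLock().unlock();
    }
}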
ReentrantReadWriteLock Source Code Analysis
Class inheritance relationships
public class ReentrantReadWriteLock implements ReadWriteLock, java.io.Serializable {}
Description: As you can see, ReentrantReadWriteLock implements the ReadWriteLock interface, which defines the contract for obtaining read locks and write locks that implementing classes must fulfil. It also implements the Serializable interface, meaning it can be serialized, and as the source shows, ReentrantReadWriteLock implements its own serialization logic.
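For reference, the ReadWriteLock interface that it implements only defines the two lock views (javadoc comments omitted):
public interface ReadWriteLock {
    // Returns the lock used for reading.
    Lock readLock();
    // Returns the lock used for writing.
    Lock writeLock();
}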
Internal Classes of Classes
ReentrantReadWriteLock has five internal classes, which are also related to one another. Their relationship is shown in the figure below.
Description: As shown above, Sync inherits from AQS, NonfairSync inherits from Sync class, FairSync inherits from Sync class; ReadLock implements Lock interface, WriteLock also implements Lock interface.
Internal Classes - Class Sync
- Inheritance of the Sync class
abstract static class Sync extends AbstractQueuedSynchronizer {}
Description: The Sync abstract class inherits from the AQS abstract class. The Sync class provides support for ReentrantReadWriteLock.
- Internal classes of the Sync class
The Sync class contains two internal classes, HoldCounter and ThreadLocalHoldCounter. HoldCounter is mainly used together with the read lock; its source code is as follows.
// Counter
static final class HoldCounter {
    // Reentrant count
    int count = 0;
    // Use id, not reference, to avoid garbage retention
    // The tid of the current thread
    final long tid = getThreadId(Thread.currentThread());
}
Description: HoldCounter has two main fields, count and tid: count is the number of times the read thread has re-entered, and tid is the value of the thread's tid field, which uniquely identifies a thread. The source code of ThreadLocalHoldCounter is as follows.
// Local thread counter
static final class ThreadLocalHoldCounter
extends ThreadLocal<HoldCounter> {
// Override initialValue so that a fresh HoldCounter is returned when none has been set
public HoldCounter initialValue() {
return new HoldCounter();
}
}
Description: ThreadLocalHoldCounter overrides the initialValue method of ThreadLocal, the class that associates an object with each thread. If no value has been set, get() returns the HoldCounter created by initialValue.
- Properties of the Sync class
abstract static class Sync extends AbstractQueuedSynchronizer {
// version number
private static final long serialVersionUID = 6317671515068378041L;
// High 16 bits are used for the read lock, low 16 bits for the write lock
static final int SHARED_SHIFT = 16;
// Read lock unit
static final int SHARED_UNIT = (1 << SHARED_SHIFT);
// Maximum number of read locks
static final int MAX_COUNT = (1 << SHARED_SHIFT) - 1;
// Write lock mask (also the maximum write lock count)
static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;
// Local thread counter
private transient ThreadLocalHoldCounter readHolds;
// Cached counters
private transient HoldCounter cachedHoldCounter;
// First read thread
private transient Thread firstReader = null;
// Hold count of the first read thread
private transient int firstReaderHoldCount;
}
Description: These fields include the maximum read/write lock counts, the thread-local counter, and the related caches.
- Constructor for the Sync class
// Constructor
Sync() {
// Local thread counter
readHolds = new ThreadLocalHoldCounter();
// Set the AQS state (ensures visibility of readHolds)
setState(getState());
}
Description: Sync's constructor initializes the thread-local counter and sets the AQS state.
Class Attributes
public class ReentrantReadWriteLock
        implements ReadWriteLock, java.io.Serializable {
    // Version number
    private static final long serialVersionUID = -6992448646407690164L;
    // Read lock
    private final ReentrantReadWriteLock.ReadLock readerLock;
    // Write lock
    private final ReentrantReadWriteLock.WriteLock writerLock;
    // Synchronizer (provides the synchronization queue)
    final Sync sync;
    private static final sun.misc.Unsafe UNSAFE;
    // Offset of the Thread tid field
    private static final long TID_OFFSET;
    static {
        try {
            UNSAFE = sun.misc.Unsafe.getUnsafe();
            Class<?> tk = Thread.class;
            // Get the memory offset of the Thread tid field
            TID_OFFSET = UNSAFE.objectFieldOffset
                (tk.getDeclaredField("tid"));
        } catch (Exception e) {
            throw new Error(e);
        }
    }
}
Description: ReentrantReadWriteLock's fields consist of a read lock object, a write lock object, and a Sync object that provides the synchronization queue.
Class constructor
- ReentrantReadWriteLock() type constructor
public ReentrantReadWriteLock() {
this(false);
}
Description: This constructor calls the parameterized constructor, passing false (non-fair by default).
- ReentrantReadWriteLock(boolean)-type constructor
public ReentrantReadWriteLock(boolean fair) {
// Fair strategy or non-fair strategy
sync = fair ? new FairSync() : new NonfairSync();
// Read lock
readerLock = new ReadLock(this);
// write lock
writerLock = new WriteLock(this);
}
Description: You can specify whether to set a fair policy or a non-fair policy, and the constructor generates two objects, a read lock and a write lock.
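A small usage sketch of the two constructors (the class name is illustrative):
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock nonFair = new ReentrantReadWriteLock();     // non-fair by default
        ReentrantReadWriteLock fair    = new ReentrantReadWriteLock(true); // fair ordering

        System.out.println(nonFair.isFair()); // false
        System.out.println(fair.isFair());    // true
    }
}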
Internal Classes - Sync Core Functions Analysis
Most operations on the ReentrantReadWriteLock object are forwarded to the Sync object for processing. The following is an analysis of the key functions in the Sync class
- sharedCount function
Returns the number of read lock acquisitions held (the shared count); the source code is as follows.
static int sharedCount(int c) { return c >>> SHARED_SHIFT; }
Explanation: The read lock count is obtained by unsigned-right-shifting state by 16 bits, since the high 16 bits of state record the read lock count and the low 16 bits record the write lock count.
- exclusiveCount function
Returns the number of write lock acquisitions held (including reentrant acquisitions); the source code is as follows.
static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }
Description: EXCLUSIVE_MASK is defined as:
static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;
That is, 1 shifted left by 16 bits, minus 1, i.e. 0x0000FFFF. The exclusiveCount method ANDs the synchronization state (an int) with 0x0000FFFF, in other words it takes the low 16 bits of the state.
So what do the low 16 bits represent? From the comment on exclusiveCount, which describes it as the number of exclusive acquisitions, i.e., the number of times the write lock has been acquired, we can conclude that the low 16 bits of the synchronization state record the number of write lock acquisitions.
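A small sketch of how the two counts are carved out of the single int state; the constants mirror those in Sync, and the sample value is purely illustrative:
public class StateSplitDemo {
    static final int SHARED_SHIFT   = 16;
    static final int SHARED_UNIT    = (1 << SHARED_SHIFT);     // 0x00010000
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1; // 0x0000FFFF

    static int sharedCount(int c)    { return c >>> SHARED_SHIFT; } // read count: high 16 bits
    static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; } // write count: low 16 bits

    public static void main(String[] args) {
        // Example: 3 read acquisitions and 2 (reentrant) write acquisitions.
        int state = 3 * SHARED_UNIT + 2; // 0x00030002
        System.out.println(sharedCount(state));    // 3
        System.out.println(exclusiveCount(state)); // 2
    }
}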
Write Lock Acquisition
The write lock of ReentrantReadWriteLock cannot be acquired by more than one thread at the same time; it is clearly an exclusive lock, and its synchronization semantics are implemented by overriding the tryAcquire method of AQS.
- tryAcquire function
protected final boolean tryAcquire(int acquires) {
/*
* Walkthrough:
* 1. If read count nonzero or write count nonzero
* and owner is a different thread, fail.
* 2. If count would saturate, fail. (This can only
* happen if count is already nonzero.)
* 3. Otherwise, this thread is eligible for lock if
* it is either a reentrant acquire or
* queue policy allows it. If so, update state
* and set owner.
*/
    // Get current thread
    Thread current = Thread.currentThread();
    // Get state
    int c = getState();
    // Write (exclusive) count
    int w = exclusiveCount(c);
    if (c != 0) { // state is not 0
        // (Note: if c != 0 and w == 0 then shared count != 0)
        if (w == 0 || current != getExclusiveOwnerThread()) // write count is 0 (readers hold the lock) or the current thread is not the exclusive owner
            return false;
        if (w + exclusiveCount(acquires) > MAX_COUNT) // would exceed the maximum write count
            throw new Error("Maximum lock count exceeded");
        // Reentrant acquire
        // Update the AQS state
        setState(c + acquires);
        return true;
    }
    if (writerShouldBlock() ||
        !compareAndSetState(c, c + acquires)) // should the write thread block, or did the CAS fail
        return false;
    // Set the exclusive owner thread
    setExclusiveOwnerThread(current);
    return true;
}
Description: This function acquires the write lock. It first reads state and checks whether it is 0:
1. If it is 0, no thread currently holds a read or write lock. It then checks whether the write thread should block: under the non-fair policy it never blocks; under the fair policy it blocks if another thread has been waiting longer in the synchronization queue. If it need not block and the CAS on state succeeds, the exclusive owner is set and true is returned.
2. If state is not 0, a read lock or write lock is already held. If the write count is 0 (meaning readers hold the lock) or the current thread is not the exclusive owner, false is returned, indicating failure; otherwise, if the reentrant write count would exceed the maximum, an Error is thrown, and if not, the state is updated and true is returned. The flowchart of the function is as follows.
Its main logic is: when the read lock is held by any read thread, or the write lock is held by another thread, write lock acquisition fails; otherwise it succeeds, and reentry is supported by increasing the write state.
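A minimal sketch of that rule (class and thread names are illustrative): while another thread holds the read lock, an attempt to take the write lock with tryLock() fails; once the reader is gone, it succeeds.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class WriteExclusionDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

        Thread reader = new Thread(() -> {
            rwl.readLock().lock();
            try {
                // Hold the read lock for a moment.
                try { TimeUnit.MILLISECONDS.sleep(500); } catch (InterruptedException ignored) { }
            } finally {
                rwl.readLock().unlock();
            }
        });
        reader.start();
        TimeUnit.MILLISECONDS.sleep(100); // let the reader take the read lock first

        // The write lock cannot be granted while another thread holds the read lock.
        System.out.println(rwl.writeLock().tryLock()); // false

        reader.join();
        if (rwl.writeLock().tryLock()) {               // true, the reader is gone
            System.out.println("write lock acquired");
            rwl.writeLock().unlock();
        }
    }
}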
Write Lock Release
The write lock is released by overriding the tryRelease method of AQS. The source code is:
- The tryRelease function
/*
* Note that tryRelease and tryAcquire can be called by
* Conditions. So it is possible that their arguments contain
* both read and write holds that are all released during a
* condition wait and re-established in tryAcquire.
*/
protected final boolean tryRelease(int releases) {
// Check whether the current thread holds the lock exclusively
if (!isHeldExclusively())
throw new IllegalMonitorStateException();
// Calculate the number of write locks after releasing resources
int nextc = getState() - releases;
boolean free = exclusiveCount(nextc) == 0; // Whether the release was successful or not
if (free)
setExclusiveOwnerThread(null); // Set Exclusive Threads to Empty
setState(nextc); // Setting state
return free;
}
Description: This function releases the write lock. It first checks whether the current thread is the exclusive owner; if not, an exception is thrown. Otherwise it computes the write count that remains after the release: if it is 0, the lock has been fully released and the exclusive owner is cleared; otherwise the resource is still held (a reentrant release). Its flowchart is as follows.
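A quick sketch of the exclusivity check described above (class name illustrative): releasing the write lock from a thread that does not own it throws IllegalMonitorStateException.
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class WrongReleaseDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
        rwl.writeLock().lock(); // owned by the main thread

        Thread other = new Thread(() -> {
            try {
                rwl.writeLock().unlock(); // not the owner
            } catch (IllegalMonitorStateException e) {
                System.out.println("caught: " + e);
            }
        });
        other.start();
        other.join();

        rwl.writeLock().unlock(); // the owner releases normally
    }
}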
Read Lock Acquisition
After the write lock, let's look at the read lock. It is not an exclusive lock: it can be acquired by multiple read threads at the same moment, i.e., it is a shared lock. As described earlier for AQS, the synchronization semantics of shared components are implemented by overriding the tryAcquireShared and tryReleaseShared methods of AQS. Read lock acquisition is implemented as follows:
- The tryAcquireShared function
private IllegalMonitorStateException unmatchedUnlockException() {
return new IllegalMonitorStateException(
"attempt to unlock read lock, not locked by current thread");
}
// Access to resources in a shared model
protected final int tryAcquireShared(int unused) {
/*
* Walkthrough:
* 1. If write lock held by another thread, fail.
* 2. Otherwise, this thread is eligible for
* lock wrt state, so ask if it should block
* because of queue policy. If not, try
* to grant by CASing state and updating count.
* Note that step does not check for reentrant
* acquires, which is postponed to full version
* to avoid having to check hold count in
* the more typical non-reentrant case.
* 3. If step 2 fails either because thread
* apparently not eligible or CAS fails or count
* saturated, chain to version with full retry loop.
*/
    // Get current thread
    Thread current = Thread.currentThread();
    // Get state
    int c = getState();
    if (exclusiveCount(c) != 0 &&
        getExclusiveOwnerThread() != current) // the write count is not 0 and the write lock is held by another thread
        return -1;
    // Read lock count
    int r = sharedCount(c);
    if (!readerShouldBlock() &&
        r < MAX_COUNT &&
        compareAndSetState(c, c + SHARED_UNIT)) { // the reader need not block, the count is below the maximum, and the CAS succeeded
        if (r == 0) { // read lock count is 0
            // Record the first read thread
            firstReader = current;
            // Its hold count is 1
            firstReaderHoldCount = 1;
        } else if (firstReader == current) { // the current thread is the first read thread
            // Increase its hold count
            firstReaderHoldCount++;
        } else { // the read count is not 0 and the current thread is not the first read thread
            // Get the cached counter
            HoldCounter rh = cachedHoldCounter;
            if (rh == null || rh.tid != getThreadId(current)) // the counter is null or its tid is not the current thread's tid
                // Get the counter for the current thread
                cachedHoldCounter = rh = readHolds.get();
            else if (rh.count == 0) // the count is 0
                // Re-associate the counter with the thread local
                readHolds.set(rh);
            rh.count++;
        }
        return 1;
    }
    return fullTryAcquireShared(current);
}
Description: This function lets a read thread acquire the read lock. It first checks whether the write count is non-zero and the write lock is held by another thread; if so, it returns -1 immediately. Otherwise, if the reader need not block, the read count is below the maximum, and the CAS on state succeeds, it updates the bookkeeping: if there were no read locks yet, it records the first read thread firstReader and sets firstReaderHoldCount to 1; if the current thread is the first read thread, it increments firstReaderHoldCount; otherwise it updates the HoldCounter associated with the current thread. The flowchart is as follows.
Read lock acquisition fails when the write lock has been acquired by another thread; otherwise it succeeds, and the synchronization state is updated with CAS.
Note that the synchronization state is increased by SHARED_UNIT ((1 << SHARED_SHIFT), i.e. 0x00010000), because, as described above, the high 16 bits of the synchronization state record the number of read lock acquisitions.
If the CAS fails, or a thread that already holds the read lock needs to re-enter outside this fast path, the work is completed by the fullTryAcquireShared method.
- fullTryAcquireShared function
final int fullTryAcquireShared(Thread current) {
/*
* This code is in part redundant with that in
* tryAcquireShared but is simpler overall by not
* complicating tryAcquireShared with interactions between
* retries and lazily reading hold counts.
*/
    HoldCounter rh = null;
    for (;;) { // spin
        // Get state
        int c = getState();
        if (exclusiveCount(c) != 0) { // the write count is not 0
            if (getExclusiveOwnerThread() != current) // the write lock is held by another thread
                return -1;
            // else we hold the exclusive lock; blocking here
            // would cause deadlock.
        } else if (readerShouldBlock()) { // the write count is 0 but the reader should block
            // Make sure we're not acquiring read lock reentrantly
            if (firstReader == current) { // the current thread is the first read thread
                // assert firstReaderHoldCount > 0;
            } else { // the current thread is not the first read thread
                if (rh == null) { // no counter has been fetched yet
                    rh = cachedHoldCounter;
                    if (rh == null || rh.tid != getThreadId(current)) { // the counter is null or its tid is not the current thread's tid
                        rh = readHolds.get();
                        if (rh.count == 0)
                            readHolds.remove();
                    }
                }
                if (rh.count == 0) // not a reentrant acquisition, so give way
                    return -1;
            }
        }
        if (sharedCount(c) == MAX_COUNT) // the maximum read lock count has been reached, throw an error
            throw new Error("Maximum lock count exceeded");
        if (compareAndSetState(c, c + SHARED_UNIT)) { // the CAS succeeded
            if (sharedCount(c) == 0) { // the read count was 0
                // Record the first read thread
                firstReader = current;
                firstReaderHoldCount = 1;
            } else if (firstReader == current) {
                firstReaderHoldCount++;
            } else {
                if (rh == null)
                    rh = cachedHoldCounter;
                if (rh == null || rh.tid != getThreadId(current))
                    rh = readHolds.get();
                else if (rh.count == 0)
                    readHolds.set(rh);
                rh.count++;
                cachedHoldCounter = rh; // cache for release
            }
            return 1;
        }
    }
}
Description: In tryAcquireShared, if the three fast-path conditions (the reader need not block, the count is below the maximum, and the CAS succeeds) are not all satisfied, fullTryAcquireShared is called; it retries in a loop to ensure the operation can complete. Its logic is similar to that of tryAcquireShared, and you can study it further on your own.
Read Lock Release
The implementation of read lock release is mainly through the method tryReleaseShared, the source code is as follows, see the comments for the main logic:
- The tryReleaseShared function
protected final boolean tryReleaseShared(int unused) {
    // Get current thread
    Thread current = Thread.currentThread();
    if (firstReader == current) { // the current thread is the first read thread
        // assert firstReaderHoldCount > 0;
        if (firstReaderHoldCount == 1) // its hold count is 1
            firstReader = null;
        else // otherwise just decrease the hold count
            firstReaderHoldCount--;
    } else { // the current thread is not the first read thread
        // Get the cached counter
        HoldCounter rh = cachedHoldCounter;
        if (rh == null || rh.tid != getThreadId(current)) // the counter is null or its tid is not the current thread's tid
            // Get the counter for the current thread
            rh = readHolds.get();
        int count = rh.count;
        if (count <= 1) { // the count is less than or equal to 1
            // Remove the thread-local counter
            readHolds.remove();
            if (count <= 0) // the count is less than or equal to 0, throw an exception
                throw unmatchedUnlockException();
        }
        // Decrease the count
        --rh.count;
    }
    for (;;) { // spin until the CAS succeeds
        // Get state
        int c = getState();
        int nextc = c - SHARED_UNIT;
        if (compareAndSetState(c, nextc))
            // Releasing the read lock has no effect on readers,
            // but it may allow waiting writers to proceed if
            // both read and write locks are now free.
            return nextc == 0;
    }
}
Description: This function releases a read lock. It first checks whether the current thread is the first read thread firstReader: if so and its hold count firstReaderHoldCount is 1, firstReader is cleared, otherwise the count is simply decremented. If the current thread is not the first read thread, it fetches the cached counter (the counter of the last thread that acquired the read lock); if the counter is null or its tid does not match the current thread's tid, the current thread's own counter is fetched. If that counter's count is at most 1, the thread-local counter is removed, and if it is at most 0, an exception is thrown; the count is then decremented. In either case the method enters a spin loop that retries the CAS until the state is successfully updated, as the following flowchart shows.
lock downgrade
The read/write lock supports lock downgrade: a write lock can be downgraded to a read lock by following the sequence of acquiring the write lock, then acquiring the read lock, and then releasing the write lock. Lock upgrading is not supported. The following example code is taken from the ReentrantReadWriteLock documentation:
void processCachedData() {
    rwl.readLock().lock();
    if (!cacheValid) {
        // Must release read lock before acquiring write lock
        rwl.readLock().unlock();
        rwl.writeLock().lock();
        try {
            // Recheck state because another thread might have
            // acquired write lock and changed state before we did.
            if (!cacheValid) {
                data = ...
                cacheValid = true;
            }
            // Downgrade by acquiring read lock before releasing write lock
            rwl.readLock().lock();
        } finally {
            rwl.writeLock().unlock(); // Unlock write, still hold read
        }
    }

    try {
        use(data);
    } finally {
        rwl.readLock().unlock();
    }
}
The process here can be explained as follows:
- Acquire read lock: first try to acquire the read lock to check whether the cache is valid.
- Check the cache: if the cache is invalid, the read lock needs to be released, as it must be released before a write lock can be acquired.
- Acquire Write Lock: Acquire a write lock in order to update the cache. At this point, it may also be necessary to re-check the cache state, as other threads may have modified the state between releasing the read lock and acquiring the write lock.
- Update Cache: If the cache is confirmed to be invalid, update the cache and mark it as valid.
- Write lock demoted to read lock: demote a write lock to a read lock by acquiring the read lock before releasing the write lock. This way, after releasing the write lock, other threads can read concurrently, but not write.
- Using data: Cached data can now be used safely.
- Release Read Lock: Release the read lock after completing the operation.
This process combines the benefits of read and write locks to ensure data consistency while still allowing concurrent reads where possible. Code using read/write locks may look more complex than code using a simple mutual exclusion lock, but it provides finer-grained concurrency control that can improve the performance of multithreaded applications.
ReadWriteLock and StampedLock
ReadWriteLock
ReadWriteLock is an interface provided by Java, with the full class name java.util.concurrent.locks.ReadWriteLock; the ReentrantReadWriteLock discussed above implements this interface. It allows multiple threads to read a shared resource at the same time, but only one thread to write to it. This mechanism improves the concurrency of read operations, while write operations require exclusive access to the resource.
Characteristics
- Multiple threads can acquire a read lock at the same time, but only one thread can acquire a write lock.
- When a thread holds a write lock, no other thread can acquire a read lock or a write lock; reads and writes are mutually exclusive.
- When a thread holds a read lock, other threads can acquire the read lock at the same time, read-read sharing. However, it is not possible to acquire a write lock, which is mutually exclusive.
Usage Scenarios
ReadWriteLock is suitable for scenarios with many reads and few writes, such as caching systems and database connection pools. In these scenarios, read operations take up most of the time and write operations are relatively rare.
Usage Example
A simple cache system is implemented using the ReadWriteLock example:
public class Cache {
    private Map<String, Object> data = new HashMap<>();
    private ReadWriteLock lock = new ReentrantReadWriteLock();

    public Object get(String key) {
        lock.readLock().lock();
        try {
            return data.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void put(String key, Object value) {
        lock.writeLock().lock();
        try {
            data.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
In the above example, the Cache class uses ReadWriteLock to control concurrent access to its data: the get method acquires the read lock to read the data, and the put method acquires the write lock to write it.
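A short usage sketch of the Cache class above; the key and value are purely illustrative:
public class CacheDemo {
    public static void main(String[] args) {
        Cache cache = new Cache();
        cache.put("config", "value-1");          // write path: takes the write lock
        // Read path: takes the read lock; any number of threads may do this concurrently.
        System.out.println(cache.get("config")); // value-1
    }
}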
StampedLock
StampedLock is a locking mechanism introduced in Java 8, with the full class name java.util.concurrent.locks.StampedLock. It provides an optimistic read mode that can further improve the concurrency of read operations.
Characteristics
- Similar to ReadWriteLock, StampedLock supports multiple threads acquiring read locks at the same time, but allows only one thread to acquire a write lock.
- Unlike ReadWriteLock, StampedLock also provides an Optimistic Read Lock, which does not block write operations from other threads, but does need to verify the consistency of the data after the read is complete.
Usage Scenarios
StampedLock is suitable for scenarios where reads far outnumber writes and strict data consistency is not required, e.g., statistics or monitoring systems.
Usage Example
An example implementation of a counter using StampedLock:
public class Counter {
    private int count = 0;
    private StampedLock lock = new StampedLock();

    public int getCount() {
        // First try an optimistic read
        long stamp = lock.tryOptimisticRead();
        int value = count;
        // If a write happened in the meantime, fall back to a pessimistic read lock
        if (!lock.validate(stamp)) {
            stamp = lock.readLock();
            try {
                value = count;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return value;
    }

    public void increment() {
        long stamp = lock.writeLock();
        try {
            count++;
        } finally {
            lock.unlockWrite(stamp);
        }
    }
}
In the above example, the Counter class uses StampedLock to control concurrent access to the counter. The getCount method first takes an optimistic read stamp and reads the counter, then verifies the data's consistency with the validate method; if validation fails, it acquires a pessimistic read lock and re-reads the value. The increment method acquires the write lock and increments the counter.
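StampedLock can also convert a read stamp into a write stamp with tryConvertToWriteLock. The sketch below follows the pattern shown in the StampedLock javadoc; the Point class and its fields are illustrative:
import java.util.concurrent.locks.StampedLock;

public class Point {
    private double x, y;
    private final StampedLock sl = new StampedLock();

    // Move the point only if it is at the origin, upgrading the lock when possible.
    void moveIfAtOrigin(double newX, double newY) {
        long stamp = sl.readLock();
        try {
            while (x == 0.0 && y == 0.0) {
                long ws = sl.tryConvertToWriteLock(stamp);
                if (ws != 0L) {      // conversion succeeded: we now hold the write lock
                    stamp = ws;
                    x = newX;
                    y = newY;
                    break;
                } else {             // conversion failed: release the read lock and take the write lock
                    sl.unlockRead(stamp);
                    stamp = sl.writeLock();
                }
            }
        } finally {
            sl.unlock(stamp); // releases whichever mode the stamp represents
        }
    }
}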
wrap-up
ReadWriteLock and StampedLock are both important mechanisms for concurrency control in Java.
- ReadWriteLock is suitable for read-heavy, write-light scenarios.
- StampedLock is suitable for scenarios where reads far outnumber writes and strict data consistency is not required.
In practice, we need to choose the appropriate locking mechanism according to the specific scenario. By using these locking mechanisms appropriately, we can improve the performance and reliability of concurrent programs.