
Various thread synchronization locks in .NET

2024-08-18

If you program long enough, you will eventually run into multi-threaded situations. Sometimes we want several threads to cooperate to get a piece of work done. In that case you can define a shared object and have the threads coordinate their work according to its state; this is basic thread synchronization.

Languages that support multi-threaded programming generally have built-in types and methods for creating the shared objects described above, i.e. lock objects. They serve similar purposes but are used in different scenarios. .NET has so many of them that I doubt anyone could fully memorize their usage and differences without using them frequently. For ease of reference, they are recorded here.

ps: Although this article focuses on the .NET platform, most of the locking concepts covered are platform-agnostic, and many other languages (such as Java) have corresponding counterparts.

volatile keyword

Strictly speaking, volatile does not belong to the category of locks, but behind it lie the basic concepts of multithreading, and sometimes people use it to implement custom locks.

cache coherence

To understand volatile, you first need to understand the .NET/Java memory model (.NET borrowed a lot from Java's design back in the day), and the Java memory model in turn borrowed from the hardware level.

We know that in modern computers the processor executes instructions far faster than it can access main memory, so modern systems insert a cache, whose read/write speed is as close as possible to the processor's, to act as a buffer between main memory and the processor. The processor operates directly on data in the cache, which is synchronized back to main memory after the computation completes.

In a multiprocessor system, each processor has its own cache and they share the same main memory.

Correspondingly, in the Java memory model each thread has its own working memory, which holds copies of the variables that thread is using. All of a thread's operations on variables must happen in its working memory; it cannot read or write variables in main memory directly. Threads cannot access each other's working memory, so values are passed between threads through main memory.

Although the two designs look alike, the former mainly addresses the mismatch in access speed, while the latter mainly addresses memory safety (contention, leaks). Obviously, this design introduces a new problem, cache coherence: the values of the same variable stored in different working memories, or in a working memory and main memory, may not agree.


To solve this problem, many platforms provide a built-in volatile keyword. Marking a variable volatile guarantees that every thread always reads its latest value. How does this work? All threads must follow an agreed cache coherence protocol when accessing the variable, such as MSI, MESI (the Illinois protocol), MOSI, Synapse, Firefly, or the Dragon protocol. I won't go into them here; it's enough to know that the system does some extra work for us, which costs some execution efficiency.

In addition, volatile prevents the compiler from cleverly reordering instructions. Most of the time instruction reordering is harmless and improves efficiency, but occasionally it changes the result, and volatile can be used to prevent that.
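
As an illustration, here is a minimal sketch of the common "stop flag" pattern (the Worker class and its members are hypothetical, just for demonstration): marking the flag volatile keeps the loop from reading a stale, cached value.

using System;

class Worker
{
    // volatile: every read sees the latest value written by any thread,
    // and the compiler/JIT will not cache it in a register or reorder around it
    private volatile bool _shouldStop;

    public void RequestStop() => _shouldStop = true;

    public void DoWork()
    {
        while (!_shouldStop)
        {
            // ... do a unit of work ...
        }
        Console.WriteLine("worker stopped");
    }
}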

Interlocked

Similar to volatile's visibility guarantee, Interlocked provides atomic operations on variables shared by multiple threads. It is a static class that offers methods to increment, decrement, exchange, and read values in a thread-safe manner.

Its atomic operations are implemented by the CPU itself and are non-blocking, so it is not really a lock, and it is of course much more efficient than one.
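
A quick sketch of the typical calls (the counter fields here are hypothetical):

using System.Threading;

class Counters
{
    private static int _counter;
    private static long _total;

    static void Demo()
    {
        Interlocked.Increment(ref _counter);               // atomic ++
        Interlocked.Decrement(ref _counter);               // atomic --
        int old = Interlocked.Exchange(ref _counter, 0);   // swap in 0, return the old value
        // compare-and-swap: set _counter to 10 only if it currently equals 0
        Interlocked.CompareExchange(ref _counter, 10, 0);
        long snapshot = Interlocked.Read(ref _total);      // atomic 64-bit read (useful on 32-bit platforms)
    }
}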


Lock modes

Before formally introducing the various locks, it is important to understand the lock modes: locks are divided into kernel-mode locks and user-mode locks, and there are also hybrid (mixed-mode) locks.

A kernel-mode lock suspends the thread at the operating-system level and resumes it when a signal arrives. While the thread is suspended, the system takes care of it and it consumes almost no CPU, but switching threads this way is expensive.

A user-mode lock keeps the thread running, via special CPU instructions or a busy loop, until the lock becomes available. In this mode the waiting thread keeps consuming CPU, but switching is very fast.

For locks that are held for a long time, kernel-mode locks are preferred; user-mode locks are a better fit when there are many locks held for very short times with frequent switching. Also, kernel-mode locks can synchronize across processes, while user-mode locks only synchronize within a process.

In this article, apart from the lightweight synchronization primitives at the end, which are user-mode locks, all the other locks are kernel-mode.

lock Keyword

lock is probably the most common locking construct for most developers, so I won't go into detail here. Just note that the locked region should be as small as possible and the lock held as briefly as possible, to avoid unnecessary waiting.

Monitor

The lock keyword above is syntactic sugar for Monitor; the compiler generates Monitor code like the following:

lock (syncRoot)
{
    // synchronized region
}

// The lock statement above is equivalent to the following Monitor code
Monitor.Enter(syncRoot);
try
{
    // synchronized region
}
finally
{
    Monitor.Exit(syncRoot);
}

Monitor can also take a timeout (e.g. Monitor.TryEnter) to avoid waiting indefinitely. It also provides Pulse / PulseAll / Wait to implement a wait/wake-up mechanism.
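
Here is a minimal producer/consumer sketch of Wait/Pulse (the Mailbox class and field names are hypothetical): Wait releases the lock and blocks until another thread pulses the same object.

using System.Collections.Generic;
using System.Threading;

class Mailbox
{
    private readonly object _gate = new object();
    private readonly Queue<int> _queue = new Queue<int>();

    public void Produce(int item)
    {
        lock (_gate)
        {
            _queue.Enqueue(item);
            Monitor.Pulse(_gate);        // wake one thread waiting on _gate
        }
    }

    public int Consume()
    {
        lock (_gate)
        {
            while (_queue.Count == 0)
                Monitor.Wait(_gate);     // release the lock and block until pulsed
            return _queue.Dequeue();
        }
    }
}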

ReaderWriterLock

Very often the read frequency on a resource is much higher than the write frequency. In that case reads and writes should use different locks: when no write lock is held, the resource can be read concurrently (each reader taking a read lock), and writing (taking the write lock) is allowed only when no read or write lock is held. ReaderWriterLock implements exactly this.

Its main feature is that reads can proceed concurrently when no write lock is held, as opposed to an ordinary lock, where only one thread at a time can proceed regardless of whether it is reading or writing.
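
A minimal usage sketch (the cache here is hypothetical); note that ReaderWriterLockSlim is the newer, lighter alternative with a similar Enter/Exit-style API.

using System.Collections.Generic;
using System.Threading;

class Cache
{
    private static readonly ReaderWriterLock _rwLock = new ReaderWriterLock();
    private static readonly Dictionary<string, string> _data = new Dictionary<string, string>();

    public static string Read(string key)
    {
        _rwLock.AcquireReaderLock(Timeout.Infinite);   // many readers may hold this concurrently
        try
        {
            return _data.TryGetValue(key, out var value) ? value : null;
        }
        finally
        {
            _rwLock.ReleaseReaderLock();
        }
    }

    public static void Write(string key, string value)
    {
        _rwLock.AcquireWriterLock(Timeout.Infinite);   // exclusive: waits for all readers and writers
        try
        {
            _data[key] = value;
        }
        finally
        {
            _rwLock.ReleaseWriterLock();
        }
    }
}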

MethodImplAttribute

For thread synchronization at the method level, besides the lock/Monitor approach described above, you can also decorate the target method with the MethodImpl attribute (using MethodImplOptions.Synchronized).
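
A minimal sketch (the class and method are hypothetical): MethodImplOptions.Synchronized makes the runtime lock the instance (or the type, for static methods) for the duration of the call.

using System.Runtime.CompilerServices;

class Counter
{
    private int _value;

    // The runtime acquires a lock on this instance for the whole call,
    // roughly equivalent to wrapping the body in lock (this) { ... }
    [MethodImpl(MethodImplOptions.Synchronized)]
    public void Increment()
    {
        _value++;
    }
}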

SynchronizationAttribute

ContextBoundObject

To understand SynchronizationAttribute, we first have to talk about ContextBoundObject.

The first logical partition inside a process, which hosts the runtime and the assemblies, is what we call an AppDomain. Within an application domain, the area in which one or more objects live is referred to as a Context.

A message sink exists at the context boundary to detect, intercept, and process messages. When an object is a subclass of MarshalByRefObject, the CLR creates a Transparent Proxy for it, which converts between method calls on the object and messages. The application domain is a resource boundary in the CLR: in general, objects in one application domain cannot be accessed from outside it. MarshalByRefObject exists precisely to allow objects to be accessed across application domain boundaries in applications that support remoting, and it is the parent class commonly used when developing remote objects with .NET Remoting.

ContextBoundObject goes a step further: it inherits from MarshalByRefObject, so even within the same application domain, if two ContextBoundObjects live in different contexts, they access each other's methods through a Transparent Proxy as well, that is, via message-based method calls. This ensures that a ContextBoundObject's logic always executes in the context it belongs to.

ps: By contrast, instances of classes that do not inherit from ContextBoundObject are context-agile objects: they can exist in any context and always execute in the caller's context.


A process can contain multiple application domains and multiple threads. A thread can cross multiple application domains, but at any given moment it is in only one of them. A thread can also cross multiple contexts to make object calls.

SynchronizationAttribute is used to decorate a ContextBoundObject, turning its interior into a synchronization domain that only one thread may enter at a time.
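
A minimal sketch, assuming the classic .NET Framework (System.Runtime.Remoting is not available on .NET Core / .NET 5+); the Account class and its members are hypothetical.

using System.Runtime.Remoting.Contexts;

[Synchronization]                       // the object's context becomes a synchronization domain
public class Account : ContextBoundObject
{
    private int _balance;

    public void Deposit(int amount)
    {
        // only one thread at a time can be executing inside this object's context
        _balance += amount;
    }

    public int GetBalance() => _balance;
}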


WaitHandle

When reading the source code or interfaces of asynchronous frameworks, you will often come across WaitHandle. WaitHandle is an abstract class whose core method is WaitOne(int millisecondsTimeout, bool exitContext); the second parameter indicates whether to exit the synchronization domain before waiting. In most cases this parameter is irrelevant; it only matters when synchronizing with a ContextBoundObject decorated with SynchronizationAttribute, in which case it lets the current thread temporarily leave the synchronization domain so other threads can enter. See the SynchronizationAttribute section of this article for details.

WaitHandle contains the following derived classes:

  1. ManualResetEvent
  2. AutoResetEvent
  3. CountdownEvent
  4. Mutex
  5. Semaphore

ManualResetEvent

It can block one or more threads until a signal tells the ManualResetEvent to stop blocking them. Note that all waiting threads are woken up.

As you might guess, ManualResetEvent holds a signal state internally to decide whether to block the current thread: signaled means don't block, non-signaled means block. This state can be set when the object is created, e.g. ManualResetEvent event = new ManualResetEvent(false); means it starts out non-signaled, i.e. blocking the current thread.

Code example:

ManualResetEvent _manualResetEvent = new ManualResetEvent(false);

private void ThreadMainDo(object sender, RoutedEventArgs e)
{
    Thread t1 = new Thread(this.Thread1Foo);
    t1.Start(); // start thread 1
    Thread t2 = new Thread(this.Thread2Foo);
    t2.Start(); // start thread 2
    Thread.Sleep(3000); // sleep the current main thread (the one that invoked ThreadMainDo)
    _manualResetEvent.Set(); // set the signal
}

void Thread1Foo()
{
    // thread 1 blocks here
    _manualResetEvent.WaitOne();

    Console.WriteLine("t1 end");
}

void Thread2Foo()
{
    // thread 2 blocks here
    _manualResetEvent.WaitOne();

    Console.WriteLine("t2 end");
}

AutoResetEvent

The usage is almost the same as ManualResetEvent, so I won't go into it again, but the difference lies in the internal logic.

Unlike ManualResetEvent, when a thread calls the Set method, only one waiting thread is woken up and allowed to continue execution. If there are multiple threads waiting, only one of them will be woken up randomly, and the others will remain in the waiting state.

The other difference, and the reason for the "Auto" in the name: WaitOne() automatically resets the state back to non-signaled after it returns. With ManualResetEvent, by contrast, once Set() has signaled, any thread that subsequently calls WaitOne() will not block until Reset() is called to return the event to the non-signaled state.
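
A minimal sketch of that behavior (names are illustrative): each Set releases exactly one waiter, and the event resets itself as soon as that waiter gets through.

using System;
using System.Threading;

class AutoResetDemo
{
    private static readonly AutoResetEvent _gate = new AutoResetEvent(false);

    static void Main()
    {
        for (int i = 0; i < 3; i++)
        {
            int id = i;
            new Thread(() =>
            {
                _gate.WaitOne();                // passing through resets the event automatically
                Console.WriteLine($"worker {id} released");
            }).Start();
        }

        for (int i = 0; i < 3; i++)
        {
            Thread.Sleep(500);
            _gate.Set();                        // each Set lets exactly one waiter through
        }
    }
}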

CountdownEvent

Its signal carries a count, which can be increased with AddCount() or decreased with Signal(); when the count reaches zero, the threads waiting on it are unblocked.

Note: CountdownEvent is a user mode lock.
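
A minimal fork/join-style sketch (the worker count is arbitrary): each worker calls Signal, and Wait returns once the count has dropped to zero.

using System;
using System.Threading;

class CountdownDemo
{
    static void Main()
    {
        using (var countdown = new CountdownEvent(3))   // initial count = 3
        {
            for (int i = 0; i < 3; i++)
            {
                new Thread(() =>
                {
                    Thread.Sleep(200);                  // simulate some work
                    countdown.Signal();                 // decrement the count
                }).Start();
            }

            countdown.Wait();                           // blocks until the count reaches zero
            Console.WriteLine("all workers finished");
        }
    }
}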

Mutex

Mutex is an "exclusive" object that allows only one thread to hold it at a time.
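
Because it is a kernel-mode lock, a named Mutex can also synchronize across processes; a common use is enforcing a single application instance. A minimal sketch (the mutex name is just an example):

using System;
using System.Threading;

class SingleInstance
{
    static void Main()
    {
        // a named mutex is visible to other processes on the same machine
        using (var mutex = new Mutex(false, "MyApp.SingleInstance"))
        {
            if (!mutex.WaitOne(TimeSpan.Zero))          // try to acquire without waiting
            {
                Console.WriteLine("another instance is already running");
                return;
            }

            try
            {
                Console.WriteLine("running...");        // application work goes here
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}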

Semaphore

In contrast to a Mutex, which lets only one thread work at a time, Semaphore allows you to specify the maximum number of threads that can access a resource or resource pool simultaneously.
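
A minimal sketch (the limit of 3 is arbitrary): at most three threads can be inside the protected section at once.

using System;
using System.Threading;

class SemaphoreDemo
{
    // at most 3 threads may hold the semaphore at the same time (initial count 3, maximum 3)
    private static readonly Semaphore _semaphore = new Semaphore(3, 3);

    static void Worker(object id)
    {
        _semaphore.WaitOne();               // acquire a slot
        try
        {
            Console.WriteLine($"worker {id} entered");
            Thread.Sleep(1000);             // simulate work
        }
        finally
        {
            _semaphore.Release();           // return the slot
        }
    }

    static void Main()
    {
        for (int i = 0; i < 10; i++)
            new Thread(Worker).Start(i);
    }
}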


Lightweight synchronization primitives

Starting with .NET Framework 4, six new data structures are available in the System.Threading namespace that allow fine-grained concurrency and parallelism while reducing some of the necessary overhead. They are called lightweight synchronization primitives, and they are all user-mode locks, including:

  • Barrier
  • CountdownEvent (described above)
  • ManualResetEventSlim (lightweight replacement for ManualResetEvent, note that it does not inherit WaitHandle)
  • SemaphoreSlim (Semaphore lightweight alternative)
  • SpinLock (think of it as a lightweight alternative to Monitor)
  • SpinWait

Barrier

When you need a group of tasks to run a series of phases in parallel, but no task may start a phase until all the other tasks have completed the previous one, you can use an instance of the Barrier class to synchronize this kind of cooperative work. Of course, nowadays we can also use asynchronous Tasks to accomplish such work more intuitively.
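
A minimal sketch of phased work (the participant count and number of phases are arbitrary): SignalAndWait blocks each participant until all of them have reached the same point, and the post-phase action runs once per phase.

using System;
using System.Threading;

class BarrierDemo
{
    // 3 participants; the callback runs once after each phase completes
    private static readonly Barrier _barrier = new Barrier(3,
        b => Console.WriteLine($"--- phase {b.CurrentPhaseNumber} done ---"));

    static void Worker(object id)
    {
        for (int phase = 0; phase < 2; phase++)
        {
            Console.WriteLine($"worker {id} finished phase {phase}");
            _barrier.SignalAndWait();       // wait here until all 3 workers arrive
        }
    }

    static void Main()
    {
        for (int i = 0; i < 3; i++)
            new Thread(Worker).Start(i);
    }
}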

SpinWait

If the wait for a condition to be satisfied is expected to be short and you don't want an expensive context switch, spin-based waiting is a good alternative. SpinWait not only provides basic spinning, it also provides the SpinUntil method, which spins until a given condition is met. In addition, SpinWait is a struct, so it has very little overhead from a memory perspective.
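
A minimal sketch of SpinUntil (the flag and timing are hypothetical): spin briefly for a condition instead of sleeping or blocking.

using System;
using System.Threading;

class SpinDemo
{
    private static volatile bool _ready;

    static void Main()
    {
        new Thread(() =>
        {
            Thread.Sleep(50);
            _ready = true;                  // another thread sets the flag shortly after start
        }).Start();

        // spin until the flag is set, giving up after 200 ms;
        // SpinWait escalates from pure spinning to yielding as it spins longer
        bool satisfied = SpinWait.SpinUntil(() => _ready, 200);
        Console.WriteLine(satisfied ? "condition met while spinning" : "timed out");
    }
}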

A word of caution: spinning for long periods is not good practice, because spinning blocks higher-priority threads and their associated tasks, and it can also block the garbage collector. SpinWait is not designed to be used concurrently by multiple tasks or threads, so each task or thread should use its own SpinWait instance if needed.

When a thread spins, it keeps a core busy in a loop without giving up the remainder of its current processor time slice. When a task or thread instead calls a method that yields or sleeps, the underlying thread may give up the rest of its time slice, which is an expensive operation.

Therefore, in most cases, do not call such a method inside a loop just to wait for a specific condition to be met.

SpinLock is a simple encapsulation of SpinWait.


This article is published simultaneously on the Tencent Developer Community.