
Stop being confused by multithreading! Linux multithreaded synchronization, explained simply in one article


Preface

Have you ever watched your code run while the threads fight madly over resources? That's "multithreaded synchronization" acting up. Multithreaded synchronization is a cliched topic, but every time you actually have to deal with it, it still gives you a headache. This article walks you through Linux multithreaded synchronization from start to finish, turning the concepts into plain language so you're never confused again, and you can even show off a little. Whether it's locks, semaphores, or condition variables, we've got it all covered, so bookmark this article and get it all in one place!

1. What is thread synchronization? -- Queue up and follow the rules

The core of thread synchronization is controlling the order in which multiple threads access shared resources, so that access stays orderly and stable. Think of it as everyone lining up to enter a movie theater: each thread is an audience member, entering in turn. If everyone swarms in at once, not only is trouble likely, but nobody gets to see the movie.

Simply put, thread synchronization is a "queuing tool" that lets threads operate on resources in order and by the rules, avoiding chaos and errors.

2. Why do we need multithreaded synchronization? -- To avoid fights, make threads queue up

Simply put, multithreaded synchronization is about controlling the order of access between multiple threads, which ensures data consistency and keeps threads from "fighting".
For example, if multiple threads "grab" the same variable, they interfere with each other at any moment, and the program's results end up a mess, or the program even crashes. It's like several friends at a table all reaching for the last piece of meat: nobody gets it, and a fight may even break out! In a computer, this scenario leads to resource conflicts or deadlocks. The sketch below shows it going wrong in practice.
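To see this concretely, here is a minimal sketch (my own illustration, not one of the article's examples) in which two threads bump a shared counter with no protection at all. On most runs the final value falls short of 200000, because the two threads' read-modify-write steps interleave and updates get lost:

#include <pthread.h>
#include <iostream>

int counter = 0; // shared and completely unprotected

void* increment(void* arg) {
    for (int i = 0; i < 100000; ++i) {
        counter++; // read-modify-write: two threads can interleave here
    }
    return nullptr;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, increment, nullptr);
    pthread_create(&t2, nullptr, increment, nullptr);
    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);
    // Frequently prints less than 200000 because some increments were lost
    std::cout << "Final counter value: " << counter << std::endl;
    return 0;
}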

3. Common problems in thread synchronization

Why are multiple threads so prone to "fighting"? Because threads are independent units of execution, and their order of execution is uncertain. A few common problems:

  • Race condition: multiple threads grab the same resource at the same time, corrupting the data and making a mess.
  • Deadlock: threads wait for each other's resources, nobody yields, and eventually everyone is stuck (see the sketch after this list).
  • Livelock: threads keep giving way to avoid conflict, so nobody makes progress and the task stays stalled.
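To make the deadlock case concrete, here is a minimal sketch (an illustrative fragment of my own, not one of the tools discussed below): each thread grabs one lock and then waits for the other's, so with unlucky timing neither can ever proceed. The standard cure is to make every thread take the locks in the same order.

#include <pthread.h>

pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

void* thread1(void* arg) {
    pthread_mutex_lock(&lockA); // holds A
    pthread_mutex_lock(&lockB); // waits for B, which thread2 may hold
    pthread_mutex_unlock(&lockB);
    pthread_mutex_unlock(&lockA);
    return nullptr;
}

void* thread2(void* arg) {
    pthread_mutex_lock(&lockB); // holds B
    pthread_mutex_lock(&lockA); // waits for A, which thread1 may hold: deadlock
    pthread_mutex_unlock(&lockA);
    pthread_mutex_unlock(&lockB);
    return nullptr;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, thread1, nullptr);
    pthread_create(&t2, nullptr, thread2, nullptr);
    pthread_join(t1, nullptr); // with unlucky timing, this program hangs forever
    pthread_join(t2, nullptr);
    return 0;
}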

Therefore, to guarantee program correctness and data consistency, Linux provides a variety of synchronization tools. Think of them as "queuing tools": threads come one at a time, leave when done, and everyone coexists peacefully.

4. The synchronization toolbox: the whole family

There are seven main types of synchronization tools commonly used in Linux:

  • Mutex: one at a time; whoever gets the lock operates. No grabbing!
  • Condition Variable: one thread is in charge of notification, the others wait for the signal; as soon as it's called, everyone gets to work.
  • Semaphore: a limited number of slots controls how many threads access the resource at once; good for throttling.
  • Reader-Writer Lock: reads and writes differ; many can read together, but writing must be done alone.
  • Spin Lock: busy-waits by repeatedly checking the lock; good for very short locking scenarios.
  • Barrier: all threads gather here and wait until everyone arrives before starting the next step.
  • Atomic Operations: direct, lock-free operations for small data updates; fast, suited to simple counters and flag bits.

These tools may look a bit complicated, but let's take them one at a time; you'll understand each as soon as you learn it!

5. Mutex (mutual exclusion lock): whoever gets the lock operates first

The mutex is the foundation of multithreaded synchronization. As the name suggests, a mutex (mutual exclusion lock) is an exclusive mechanism: only one thread is allowed to access a shared resource at a time. To understand its role, picture the "bathroom lock" scenario: there is one bathroom in the house, you must lock the door when you go in, and unlock it when you come out, to keep others from walking in by accident.

Common Interfaces:

In the POSIX threading library, mutexes are implemented via the pthread_mutex_t type, which provides the following common interfaces:

  • pthread_mutex_init(&mutex, nullptr): initialize the mutex
  • pthread_mutex_lock(&mutex): lock; if already locked by another thread, block and wait
  • pthread_mutex_trylock(&mutex): try to lock; if the lock is already taken, return an error immediately instead of blocking
  • pthread_mutex_unlock(&mutex): unlock, releasing the mutex so other threads can lock it
  • pthread_mutex_destroy(&mutex): destroy the mutex and release the associated resources

Simple code example:

This code shows how to use a mutex so that multiple threads can safely access a shared variable counter.

#include <pthread.h>
#include <iostream>

pthread_mutex_t mutex; // Declare the mutex

int counter = 0;

void* increment(void* arg) {
    pthread_mutex_lock(&mutex); // lock
    counter++;
    pthread_mutex_unlock(&mutex); // unlock
    return nullptr;
}

int main() {
    pthread_t t1, t2;
    pthread_mutex_init(&mutex, nullptr); // Initialize the mutex

    pthread_create(&t1, nullptr, increment, nullptr);
    pthread_create(&t2, nullptr, increment, nullptr);

    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);

    std::cout << "Final counter value: " << counter << std::endl;

    pthread_mutex_destroy(&mutex); // Destroy the mutex
    return 0;
}

Code Explanation:

increment function: each thread calls this function to add 1 to the counter variable. To prevent multiple threads from modifying counter at the same time, a mutex is used:

  • pthread_mutex_lock(&mutex): lock, ensuring only one thread can modify counter
  • counter++: increment the value of counter
  • pthread_mutex_unlock(&mutex): unlock, allowing other threads to access it

main function:

  • pthread_mutex_init(&mutex, nullptr): initialize the mutex
  • Create two threads, t1 and t2, which both execute the increment function
  • pthread_join waits for t1 and t2 to finish
  • Print the final value of counter
  • pthread_mutex_destroy(&mutex): destroy the mutex and release its resources

By locking and unlocking the mutex, the code guarantees that the two threads never modify counter at the same time, which keeps the data safe.
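As an aside, the pthread_mutex_trylock interface listed earlier gives a non-blocking alternative. Here is a tiny sketch (the function name incrementIfFree is my own; it reuses the mutex and counter from the example above) of a thread doing something else instead of waiting:

void* incrementIfFree(void* arg) {
    if (pthread_mutex_trylock(&mutex) == 0) { // 0 means we got the lock
        counter++; // safe: we hold the lock
        pthread_mutex_unlock(&mutex);
    } else {
        // The lock is busy: skip the update or retry later instead of blocking
    }
    return nullptr;
}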

Pros and Cons:

Pros:

  • Simple and efficient: locking and unlocking a mutex are very simple, fast operations, well suited to cases where the resource only needs to be locked briefly.
  • Data safety: a mutex guarantees that only one thread accesses the shared resource at any moment, avoiding data conflicts and keeping data consistent.
  • Prevents resource contention: a mutex ensures a resource is never accessed by more than one thread at once, avoiding the data errors or crashes that contention causes.

Drawbacks:

  • Blocks other threads: once a resource is locked, other threads can only wait, which can hurt system efficiency, especially if the lock is held for long.
  • Risk of deadlock: if two threads each wait for the other to release a lock, deadlock results. So the order in which locks are taken must be designed very carefully.
  • Unsuitable for long holds: mutexes suit short operations; holding a lock too long hurts the program's concurrency, because other threads block waiting for it, dragging down system performance.

Application Scenarios:

Mutexes suit operations that need exclusive access to a resource. A mutex ensures those operations are not interrupted: the resource stays "locked" for the duration, so access is orderly and safe.

6. Condition Variable: act only on a signal

Condition variables are a bit like "waiting for a notification": one thread waits for a signal, and another sends it. Take a producer and a consumer: the consumer waits until goods are available; once the producer has the goods, it signals the consumer, "Come on, come get it, it's all here!"

Common Interfaces:

In the POSIX threading library, condition variables are implemented via the pthread_cond_t type and used together with a mutex. The common interfaces include:

  • pthread_cond_init(&cond, nullptr): initialize the condition variable.
  • pthread_cond_wait(&cond, &mutex): wait on the condition variable. The caller must hold the mutex; while the condition is unsatisfied, the lock is released automatically and the thread waits until it receives a signal and is woken up.
  • pthread_cond_signal(&cond): send a signal to wake one waiting thread. Suitable for notifying a single waiter.
  • pthread_cond_broadcast(&cond): broadcast a signal to wake all waiting threads.
  • pthread_cond_destroy(&cond): destroy the condition variable and release the associated resources.

Simple code example:

This code shows how to use a Condition Variable and a Mutex to coordinate the synchronization between two threads. There are two threads in the code, one waiting for a signal and the other sending a signal.

#include <pthread.h>
#include <iostream>

pthread_mutex_t mutex; // Declare the mutex
pthread_cond_t cond; // Declare the condition variable
bool ready = false;

void* waitForSignal(void* arg) {
    pthread_mutex_lock(&mutex);
    while (!ready) {
        pthread_cond_wait(&cond, &mutex); // Waiting for a signal from a condition variable
    }
    std::cout << "Signal received!" << std::endl;
    pthread_mutex_unlock(&mutex);
    return nullptr;
}

void* sendSignal(void* arg) {
    pthread_mutex_lock(&mutex);
    ready = true;
    pthread_cond_signal(&cond); // Send Signal
    pthread_mutex_unlock(&mutex);
    return nullptr;
}

int main() {
    pthread_t t1, t2;
    pthread_mutex_init(&mutex, nullptr); // Initialize the mutex
    pthread_cond_init(&cond, nullptr); // Initialize the condition variable

    pthread_create(&t1, nullptr, waitForSignal, nullptr);
    pthread_create(&t2, nullptr, sendSignal, nullptr);

    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);

    pthread_mutex_destroy(&mutex); // Destroy the mutex
    pthread_cond_destroy(&cond); // Destroy the condition variable
    return 0;
}

Code Explanation:

  • waitForSignal function: the thread that waits for the signal. After locking, it checks the state of ready. While ready is false, the thread calls pthread_cond_wait and waits until the sendSignal thread signals it, then continues execution.
  • sendSignal function: the thread that sends the signal. It locks first, sets ready to true, then calls pthread_cond_signal to notify the waiting thread that it can continue. Finally it unlocks so the waitForSignal thread can proceed.
  • main function: initializes the mutex and condition variable, creates the two threads t1 and t2, waits for them to finish, and finally destroys the mutex and condition variable.

Pros and Cons:

Pros:

  • Avoids busy-waiting: with a condition variable, a thread enters a wait state and consumes no CPU until the signal arrives, which improves efficiency.
  • More orderly multithreaded collaboration: condition variables make cooperation between threads more orderly and avoid pointless contention for resources.
  • Supports waking multiple threads: the broadcast facility can wake several threads at once, ideal for scenarios where many threads must synchronize.

Drawbacks:

  • Needs a mutex to work: a condition variable cannot be used on its own; it must be paired with a mutex, which adds complexity.
  • Spurious wakeups can occur: pthread_cond_wait may wake up "spuriously", so the condition must be re-checked in a loop.
  • Higher programming complexity: for the uninitiated, combining condition variables with mutexes makes multithreaded code harder to write.

Application Scenarios:

Condition variables fit the producer-consumer model and similar scenarios: anywhere one thread must wait for another to finish some operation, such as waiting for a task to complete, a resource to be released, or data to be processed. With a condition variable, a thread pauses automatically while the condition is unmet and resumes execution when it receives the signal.
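To give the producer-consumer model a concrete shape, here is a minimal sketch (the names, and the "queue" simplified down to a single slot, are my own) built from the same condition-variable interfaces:

#include <pthread.h>
#include <iostream>

pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
int item = 0;          // the single-slot "queue"
bool has_item = false; // the condition the consumer waits on

void* producer(void* arg) {
    pthread_mutex_lock(&mtx);
    item = 42;   // produce the goods
    has_item = true;
    pthread_cond_signal(&not_empty); // "come and get it!"
    pthread_mutex_unlock(&mtx);
    return nullptr;
}

void* consumer(void* arg) {
    pthread_mutex_lock(&mtx);
    while (!has_item) { // the loop guards against spurious wakeups
        pthread_cond_wait(&not_empty, &mtx);
    }
    std::cout << "Consumed: " << item << std::endl;
    has_item = false;
    pthread_mutex_unlock(&mtx);
    return nullptr;
}

int main() {
    pthread_t p, c;
    pthread_create(&c, nullptr, consumer, nullptr);
    pthread_create(&p, nullptr, producer, nullptr);
    pthread_join(p, nullptr);
    pthread_join(c, nullptr);
    return 0;
}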

7. Semaphore: first come, first served, limited slots

A semaphore is like a doorman limiting entry. It allows a fixed number of threads into the "critical section" (the shared resource area) at once; any threads beyond that number wait at the door. Think of a limited-edition milk tea shop that only admits five people at a time: if you want a drink, you queue!

Common Interfaces:

In the POSIX threading library, semaphores are implemented via the sem_t type. The main interfaces are:

  • sem_init(&semaphore, 0, count): initialize the semaphore. count is its initial value, the number of threads allowed in at once.
  • sem_wait(&semaphore): request a resource. If the semaphore is greater than zero, decrement it and enter the critical section; if it is zero, the thread blocks until another thread releases the resource.
  • sem_post(&semaphore): release a resource, incrementing the semaphore so that a waiting thread can continue.
  • sem_destroy(&semaphore): destroy the semaphore and free its resources.

Simple code example:

The following code shows how a semaphore controls access to a resource by multiple threads. In this example the semaphore's initial value is 1, ensuring only one thread can enter the critical section at a time.

#include <pthread.h>
#include <semaphore.h>
#include <iostream>

sem_t semaphore;

void* accessResource(void* arg) {
    sem_wait(&semaphore); // Request for resources
    std::cout << "Thread accessing resource!" << std::endl;
    sem_post(&semaphore); // Release of resources
    return nullptr;
}

int main() {
    pthread_t t1, t2;
    sem_init(&semaphore, 0, 1); // Initialize the semaphore, allowing 1 thread to access the resource

    pthread_create(&t1, nullptr, accessResource, nullptr);
    pthread_create(&t2, nullptr, accessResource, nullptr);

    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);

    sem_destroy(&semaphore); // Destroy the semaphore
    return 0;
}

Code Explanation:

  • sem_wait(&semaphore);: requests access to the resource, decrementing the semaphore. If the semaphore is zero, the thread waits.
  • sem_post(&semaphore);: releases the resource, incrementing the semaphore so other waiting threads can get in.

In the main function, the two threads t1 and t2 both call accessResource. The semaphore's initial value is set to 1, ensuring only one thread accesses the resource at any moment and avoiding conflicts.

Pros and Cons:

Pros:

  • Controls concurrency: a semaphore can admit several threads at once, making it especially suited to scenarios that allow parallel reads, such as file I/O or a database connection pool.
  • Very flexible: semaphores support not just single-thread entry but multi-thread entry as well.

Drawbacks:

  • Harder to program and debug: the counter mechanism makes it easy to tangle the logic, so semaphore code is more complex to write and harder to debug.
  • No notion of priority: semaphores have no built-in priority queue, so long-waiting threads may "starve".

Application Scenarios:

  • Rate limiting: e.g. capping the number of simultaneous connections in a database connection pool, with the semaphore enforcing the maximum.
  • Read-write separation: read operations may proceed in parallel across threads, while writes require exclusive access.
  • Shared resource management: resource pools, task queues, and the like; a pool of fixed capacity admits multiple threads, and anyone beyond capacity waits.

Semaphores shine when limiting concurrency: they give flexible control over the number of threads and fit read-write separation and throttling scenarios particularly well. They are a real "good helper" in multithreaded synchronization, as the sketch below shows.
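To see the "limited slots" idea in action, here is a small variation on the earlier example (the count of 3 and the sleep standing in for real work are my own choices): five threads compete, but at most three are inside the critical section at any moment:

#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>
#include <iostream>

sem_t slots; // counts how many threads may enter at once

void* worker(void* arg) {
    sem_wait(&slots); // take a slot; blocks while all 3 are in use
    std::cout << "Working..." << std::endl; // at most 3 threads run here concurrently
    sleep(1); // pretend the work takes a while
    sem_post(&slots); // give the slot back
    return nullptr;
}

int main() {
    pthread_t threads[5];
    sem_init(&slots, 0, 3); // allow up to 3 concurrent threads

    for (pthread_t& t : threads)
        pthread_create(&t, nullptr, worker, nullptr);
    for (pthread_t& t : threads)
        pthread_join(t, nullptr);

    sem_destroy(&slots);
    return 0;
}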

8. Reader-Writer Lock: read together, write alone!

The read-write lock, as the name implies, is designed to make "read" operations cheaper. In a multithreaded scenario, many threads may read a resource at the same time (shared access), but a write must be exclusive, guaranteeing nobody else modifies the contents mid-read. It's like a library book: everyone can read it together, but anyone who wants to change its contents has to check it out, so nobody else alters it halfway through.

Common Interfaces:

  • pthread_rwlock_init(&rwlock, nullptr): initialize the read-write lock. It must be initialized before use; attributes can optionally be set (nullptr means the defaults).
  • pthread_rwlock_rdlock(&rwlock): acquire a read lock. Multiple threads may hold the read lock at once, but if some thread holds the write lock, the caller blocks until the write lock is released.
  • pthread_rwlock_wrlock(&rwlock): acquire a write lock. A write lock requires exclusive possession; while it is held, all other read-lock or write-lock requests block until it is released.
  • pthread_rwlock_unlock(&rwlock): unlock. Both read and write locks are released through this interface: whichever the caller holds is released, letting other threads lock.
  • pthread_rwlock_destroy(&rwlock): destroy the read-write lock when it is no longer needed, freeing the associated resources.

Simple code example:

This code demonstrates the basic use of a read-write lock (rwlock), letting multiple threads read and write a shared variable counter safely and concurrently.

#include <pthread.h>
#include <iostream>

pthread_rwlock_t rwlock; // Declare the read-write lock

int counter = 0;

void* readCounter(void* arg) {
    pthread_rwlock_rdlock(&rwlock); // add a read lock
    std::cout << "Counter: " << counter << std::endl;
    pthread_rwlock_unlock(&rwlock); // unlock
    return nullptr;
}

void* writeCounter(void* arg) {
    pthread_rwlock_wrlock(&rwlock); // add a write lock
    counter++;
    pthread_rwlock_unlock(&rwlock); // unlock
    return nullptr;
}

int main() {
    pthread_t t1, t2, t3;
    pthread_rwlock_init(&rwlock, nullptr); // Initialize read/write locks

    pthread_create(&t1, nullptr, readCounter, nullptr);
    pthread_create(&t2, nullptr, writeCounter, nullptr);
    pthread_create(&t3, nullptr, readCounter, nullptr);

    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);
    pthread_join(t3, nullptr);

    pthread_rwlock_destroy(&rwlock); // Destroy read and write locks
    return 0;
}

Code Explanation:

  • readCounter function: acquires the read lock with pthread_rwlock_rdlock, reads and prints the value of counter, then releases the lock. Multiple threads can hold the read lock at the same time, allowing concurrent reads.
  • writeCounter function: acquires the write lock with pthread_rwlock_wrlock, increments counter, then releases the lock. The write lock is exclusive: only one thread at a time may write to counter.
  • main function: creates three threads, t1, t2 and t3. Two perform the read operation (readCounter) and one performs the write operation (writeCounter). The read-write lock rwlock keeps both reading and writing thread-safe.

Pros and Cons:

Pros:

  • Efficient reads: multiple threads can read the resource simultaneously without blocking each other, avoiding the inefficiency a plain mutex would impose.
  • Safe writes: writes take an exclusive lock, so data is never corrupted by interleaved reads and writes.

Drawbacks:

  • Possible writer starvation: if readers keep arriving, a writing thread may never acquire the lock, indefinitely delaying the write.
  • Not suited to write-heavy workloads: when writes dominate, the read-write lock's advantage disappears and the locking overhead actually hurts performance.

Application Scenarios:

  • Logging and configuration reading: log contents can be read by many threads at once, but writing logs or updating configuration requires exclusive access.
  • Caching systems: shared resources such as counters and read-mostly caches in multithreaded environments suit read-write locks especially well.
  • Statistics systems: where data is read frequently and written rarely, read-write locks deliver much better read throughput.

9. Spinlock: can't get the lock? Spin in place!

A spinlock is a "busy-waiting" lock: a thread that fails to get the lock spins in place, checking again and again. Spinning for long wastes CPU, so spinlocks fit scenarios where the lock is held only very briefly.

Common Interfaces:

  • pthread_spin_init(pthread_spinlock_t* lock, int pshared): initialize the spinlock. The pshared parameter specifies whether the lock is shared between processes (0 means it is used within one process only). Returns 0 on success, otherwise an error code.
  • pthread_spin_lock(pthread_spinlock_t* lock): the lock operation; tries to acquire the spinlock. If the lock is taken, the current thread loops until it gets it.
  • pthread_spin_unlock(pthread_spinlock_t* lock): the unlock operation; releases the spinlock so other threads can try to acquire it.
  • pthread_spin_destroy(pthread_spinlock_t* lock): destroy the spinlock and free its resources. The lock cannot be used again after this call unless it is reinitialized.

Simple code example:

The following code shows spinlocks controlling resource access between two threads, ensuring counter is incremented safely.

#include <pthread.h>
#include <iostream>

pthread_spinlock_t spinlock; // Declare the spinlock
int counter = 0;

void* increment(void* arg) {
    pthread_spin_lock(&spinlock); // lock
    counter++;
    pthread_spin_unlock(&spinlock); // unlock
    return nullptr;
}

int main() {
    pthread_t t1, t2;

    pthread_spin_init(&spinlock, 0); // Initialize the spinlock

    // Create two threads, each executing the increment function
    pthread_create(&t1, nullptr, increment, nullptr);
    pthread_create(&t2, nullptr, increment, nullptr);

    // Wait for both threads to finish executing
    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);

    std::cout << "Final counter value: " << counter << std::endl;

    pthread_spin_destroy(&spinlock); // Destroy the spinlock
    return 0;
}

Code Explanation:

increment function:
Each thread calls this function to add 1 to counter. To keep it thread-safe, the spinlock spinlock is used:

  • pthread_spin_lock(&spinlock): lock, giving the current thread exclusive access to counter
  • counter++: increment the value of counter
  • pthread_spin_unlock(&spinlock): unlock, so other threads can access counter

main function:

  • pthread_create(&t1, nullptr, increment, nullptr) and pthread_create(&t2, nullptr, increment, nullptr): create the two threads t1 and t2, both running the increment function.
  • pthread_join(t1, nullptr) and pthread_join(t2, nullptr): wait for t1 and t2 to finish.

Thanks to the spinlock, the two threads never modify counter at the same time, which keeps the data safe.

Pros and Cons:

Pros:

  • Less context switching: a spinlock doesn't make the thread block and sleep; it busy-waits for the lock instead. That avoids the sleep-and-wake cycle (a context switch) and makes short waits faster.
  • Ideal for short holds: spinlocks suit cases where the wait is very short, where the cost of busy-waiting is lower than the cost of a context switch.

Drawbacks:

  • Performance collapses under heavy contention: if many threads compete for the same lock at once, their busy-waiting keeps CPUs occupied, wasting CPU resources and degrading performance.
  • Unsuitable for long holds: if the lock is held for long, waiting threads hog the CPU the whole time, wasting resources. Spinlocks are only appropriate when the lock is held very briefly.

Application Scenarios:

Best for short, frequent locking: on multi-core CPUs, spinlocks are ideal when the lock is held very briefly but taken frequently. Such operations are fast and the hold time is short, so a spinlock avoids the context-switching overhead of blocking.

10. Barrier: start only when everyone's here

A barrier's purpose is to let a group of threads each reach a gathering point, then continue together. Think of it as a "rally point": every thread must wait until all the threads have arrived before they are "released" together. This is especially useful for multithreaded tasks that need stage-by-stage synchronization, such as parallel data processing: each stage needs all threads to finish and reach the designated point before the group moves on to the next stage.

Common Interfaces:

In the POSIX threading library, barriers are implemented via the pthread_barrier_t type. The common interfaces include:

  • pthread_barrier_init(pthread_barrier_t* barrier, const pthread_barrierattr_t* attr, unsigned count): initialize the barrier. The count parameter specifies how many threads the barrier waits for; once count threads have arrived, the barrier releases all of them.
  • pthread_barrier_wait(pthread_barrier_t* barrier): the calling thread enters a wait state until every thread has called this function, at which point the barrier releases them to proceed to the next step.
  • pthread_barrier_destroy(pthread_barrier_t* barrier): destroy the barrier and release the associated resources, usually at the end of the program.

Simple code example: barrier synchronization

The following code shows a barrier keeping three threads in step: each waits until all three reach the barrier point, and only then do they continue. This guarantees every thread is synchronized on the same step.

#include <pthread.h>
#include <iostream>

pthread_barrier_t barrier; // Declare the barrier

void* waitAtBarrier(void* arg) {
    std::cout << "Thread waiting at barrier..." << std::endl;
    pthread_barrier_wait(&barrier); // Waiting for the barrier
    std::cout << "Thread passed the barrier!" << std::endl;
    return nullptr;
}

int main() {
    pthread_t t1, t2, t3;

    // Initialize the barrier; 3 threads must synchronize
    pthread_barrier_init(&barrier, nullptr, 3);

    // Creating Threads
    pthread_create(&t1, nullptr, waitAtBarrier, nullptr);
    pthread_create(&t2, nullptr, waitAtBarrier, nullptr);
    pthread_create(&t3, nullptr, waitAtBarrier, nullptr);

    // Waiting for the thread to finish
    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);
    pthread_join(t3, nullptr);

    pthread_barrier_destroy(&barrier); // Destroy the barrier
    return 0;
}

Code Explanation:

waitAtBarrier function: every thread runs this function. It first prints "Thread waiting at barrier..." to show it has reached the barrier, then calls pthread_barrier_wait(&barrier) and waits there until all the threads have arrived, after which it resumes execution and prints "Thread passed the barrier!".

main function:

  • pthread_barrier_init(&barrier, nullptr, 3);: initialize the barrier, requiring 3 threads to synchronize.
  • Create 3 threads (t1, t2, t3), all of which call the waitAtBarrier function.
  • pthread_join waits for all threads to finish.
  • pthread_barrier_destroy(&barrier);: destroy the barrier and release its resources.

The effect: all 3 threads wait at the barrier until everyone arrives, then pass through together, guaranteeing synchronized execution.

Pros and Cons:

Pros:

  • Simplifies staged synchronization: barriers are ideal for phase-by-phase synchronization in multithreaded tasks, e.g. batch processing of large datasets, where every batch must finish and all threads regroup before the next phase begins.
  • Simple to use: where multiple threads must synchronize, a barrier offers a clean solution and avoids the complexity of hand-rolled counting and locking.

Drawbacks:

  • Inflexible: the number of threads to synchronize is fixed at barrier initialization and cannot change at runtime, which is awkward when the thread count varies.
  • Wasted time waiting: the barrier only opens when all threads arrive; if some threads run slowly, the others sit idle waiting.
  • Deadlock risk: if one thread never reaches the barrier point, all the others wait forever, potentially deadlocking the whole system.

Application Scenarios:

Barriers are used where synchronized phases are required, especially the following:

  • Step-by-step data processing: some steps require every thread to finish in sync before the next step can begin.
  • Milestone synchronization: staged tasks where each step needs multiple threads working together, such as the synchronization steps in parallel computing.
  • Merging multithreaded computations: tasks like scientific computing or data aggregation, where each thread finishes its part and the results are gathered and summarized at a barrier point.

11. Atomic Operations: small updates, fast and precise

An atomic operation is the "small and fast" tool of multithreading. It updates data directly and exclusively, and the operation is indivisible: all or nothing, with no lock required. It suits fast updates of small data, such as counters and flag bits. In a multithreaded environment, atomic operations avoid the performance overhead of locks, making them very efficient for updating simple shared state.

Common Interfaces:

In the C++ standard library, the atomic-operation interfaces are very simple. The commonly used ones:

  • std::atomic<T>: declares an atomic variable of type T, usually a simple type such as int or bool. For example, std::atomic<int> counter(0); declares an integer atomic variable counter with initial value 0.
  • fetch_add() and fetch_sub(): atomic addition and subtraction. For example, counter.fetch_add(1); safely adds 1 and returns the old value.
  • load() and store(): load() reads the variable's value atomically; store() writes a value atomically, guaranteeing data consistency.

Simple Code Example: Atomic Operation Counter Implementation

The following code shows how to use std::atomic to increment the shared data counter safely. No lock is needed here; atomic operations are automatically thread-safe.

#include <atomic>
#include <pthread.h>
#include <iostream>

std::atomic<int> counter(0); // Using Atomic Types

void* increment(void* arg) {
    for (int i = 0; i < 100000; ++i) {
        counter++; // atomic operation, automatically thread-safe
    }
    return nullptr;
}

int main() {
    pthread_t t1, t2;

    // Create two threads
    pthread_create(&t1, nullptr, increment, nullptr);
    pthread_create(&t2, nullptr, increment, nullptr);

    // Wait for both threads to finish executing
    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);

    std::cout << "Final counter value: " << counter << std::endl;
    return 0;
}

Code Explanation:

  • std::atomic<int> counter(0);: declares the counter using the atomic type std::atomic. All operations on counter are thread-safe.
  • counter++: an atomic increment; no lock is needed, and data stays consistent even in a multithreaded environment.

With atomic operations we avoid the performance overhead of locking; the code stays concise and efficient, which is ideal for frequent updates to small data.

Pros and Cons:

Pros:

  • No locks needed: atomic operations are inherently thread-safe and require no extra locking mechanism.
  • High performance: with no lock overhead, atomic operations are faster, especially for small update operations.
  • Simple code: with std::atomic, shared data can be updated directly and the code stays clean.

Drawbacks:

  • Suitable for simple data only: Atomic operations are suitable for single operations on small data and cannot be used for complex data structures or multi-step operations.
  • Complex synchronization is not supported: Atomic operations are only suitable for simple synchronization needs such as counting, flag bits, etc. and cannot handle complex concurrency control.
  • May affect readability: If one is not familiar with the semantics of atomic operations, the code may be less readable.

Application Scenarios:

Atomic manipulation is well suited for several situations:

  • Counters: incrementing and decrementing a counter is very efficient in multithreaded environments, e.g. counting tasks in a thread pool.
  • Flag-bit updates: updating status flags in a multithreaded task, such as whether a task is finished or a resource is available (see the sketch after this list).
  • Fast statistics: where frequent updates are required, atomic operations avoid lock overhead and speed up the counting.
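For the flag-bit case, here is a minimal sketch (the done flag and the polling loop are my own illustration) using the load() and store() interfaces from above: one thread publishes that the task is finished, another watches for it:

#include <atomic>
#include <pthread.h>
#include <iostream>

std::atomic<bool> done(false); // status flag shared between threads

void* worker(void* arg) {
    // ... do the actual task here ...
    done.store(true); // atomically publish "task finished"
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, worker, nullptr);

    while (!done.load()) {
        // busy-polling just for illustration; real code might sleep or do other work
    }
    std::cout << "Task is done!" << std::endl;

    pthread_join(t, nullptr);
    return 0;
}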

Summary:

Today we explored the multithreaded synchronization methods on Linux together: from mutexes to condition variables, then semaphores, read-write locks and spinlocks, and finally barriers and atomic operations, unpacking each method's use cases and trade-offs one by one. With these techniques learned, writing multithreaded programs is no longer a headache! Synchronization is not a mystery: with these basic tools, you too can write smooth, safe multithreaded programs.

If you found this helpful, don't forget to like and share, and follow me so we can learn more fun programming together! Already mastered these synchronization methods? Congratulations! Not quite there yet? No problem: leave a comment and we'll discuss it together until you get it all!

Want to find me faster? Search for the public account "Learn Programming with Xiaokang" and grow with a group of programming buddies who love to learn!

What will you learn by following?

  • Here we share Linux C, C++, Go development, computer basics and programming interviews, etc. The content is in-depth and easy to understand, so that technical learning becomes easy and interesting.

  • Whether you are preparing for an interview or want to improve your programming skills, this place is dedicated to providing practical, interesting and insightful technology sharing. Come and follow us, let's grow together!


Also, Xiaokang recently set up a technical exchange group for discussing technical issues and answering readers' questions. If anything in an article is unclear, you're welcome to join the group and ask; I'll do my best to answer. I look forward to improving together with all of you!