
Interview question: how to solve cache and database consistency problems?

Published: 2024-07-23 17:54:52

The so-called consistency problem arises when a cache and a database are used together: data updates must stay synchronized between the two. Whichever is modified first, the cache or the database, both must ultimately hold the same data, with no lasting divergence between them.

1. Consistency solutions

There are two classic solutions for caching and database consistency:

  1. Use delayed double deletion plus MQ to ensure data consistency.
  2. Use Canal to listen to the MySQL Binlog. Canal sends the write operations (inserts, deletes, and updates) to Kafka; the application then consumes the Kafka messages and updates Redis, ensuring eventual consistency of the data.

Note that with either delayed double deletion or Canal, transient data inconsistencies can occur, but eventual consistency is guaranteed.

However, the delayed double deletion + MQ approach has a tricky problem that is difficult to deal with: how long should the delay be?

If the delay is set too short, data inconsistencies will occur in concurrent scenarios; if it is set too long, stale data will be served for a longer period. The root cause is that thread scheduling is controlled by the operating system and cannot be controlled by the application.

For these reasons, using Canal has proven to be a simple and effective way to keep the data in Redis and MySQL consistent.

Implementation process

The implementation process of ensuring data consistency through Canal is shown below:

[Figure: Canal synchronization workflow]

To have Canal read the MySQL Binlog, proceed as follows:

  1. Enable and configure MySQL's Binlog settings.
  2. Restart the MySQL service.
  3. Create a canal/canal user in MySQL for Canal to use when synchronizing data (this step is optional).
  4. Install and unzip Canal.
  5. Modify the configuration file to synchronize data to Kafka, and configure the Kafka server information.
  6. Modify the example/ configuration file to specify the Kafka topic to which data should be synchronized.
  7. Copy the jar package from the plugin directory to the lib directory so that Canal can synchronize data to MQ.
  8. Start the Canal service.
  9. Listen to the Kafka topic in your code and update the Redis cache based on the received messages.
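Step 9 might look roughly like the following Python sketch. A dict stands in for a real Redis client, and the cache-key scheme (`database:table:pk`) is an assumption for illustration; a real consumer would poll the topic with a Kafka client library and write to Redis instead.

```python
import json

cache = {}  # stands in for Redis

def handle_canal_message(raw_message: str):
    """Apply one Canal flat message (consumed from Kafka) to the cache."""
    msg = json.loads(raw_message)
    if msg.get("isDdl"):              # ignore DDL statements such as ALTER TABLE
        return
    op = msg["type"]                  # "INSERT", "UPDATE", or "DELETE"
    pk = msg["pkNames"][0]            # primary-key column name
    for row in msg.get("data") or []:
        # Build a cache key from database, table, and primary key, e.g. "test:user:1"
        key = f'{msg["database"]}:{msg["table"]}:{row[pk]}'
        if op in ("INSERT", "UPDATE"):
            cache[key] = row          # write the latest row into the cache
        elif op == "DELETE":
            cache.pop(key, None)      # evict the deleted row
```

In a real consumer this function would be called once per record inside the Kafka polling loop, and errors would be handled so that offsets are only committed after the cache update succeeds.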

The format of data stored in Kafka is as follows:
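The original sample message is not preserved here; as a hedged illustration, a Canal flat message delivered to Kafka typically looks like the following (all field values are invented for this example):

```json
{
  "data": [{"id": "1", "name": "Alice", "age": "25"}],
  "database": "test",
  "es": 1721700892000,
  "id": 3,
  "isDdl": false,
  "mysqlType": {"id": "int(11)", "name": "varchar(64)", "age": "int(11)"},
  "old": [{"name": "Bob"}],
  "pkNames": ["id"],
  "sql": "",
  "sqlType": {"id": 4, "name": 12, "age": 4},
  "table": "user",
  "ts": 1721700892123,
  "type": "UPDATE"
}
```

Here `type` identifies the operation (INSERT/UPDATE/DELETE), `data` holds the rows after the change, and `old` holds the previous values of the changed columns for an UPDATE.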

4. Final

Theory is a lamp that lights the road ahead, while practice is the solid footsteps we take on it. Only by combining the two closely, practicing again and again, can we move forward steadily on the journey of knowledge and achieve real growth and progress. So take action: watching without practicing gets you nowhere.

This article has been included in my interview mini-site, which covers modules such as Redis, JVM, Concurrency, MySQL, Spring, Spring MVC, Spring Boot, Spring Cloud, MyBatis, Design Patterns, Message Queuing, and more.