In today's workplace, the "996" work schedule has always been a controversial topic.
"996" refers to a schedule that runs from 9:00 a.m. to 9:00 p.m. on weekdays, with an hour (or less) off at noon and in the evening, for a total of 10 or more working hours a day, six days a week.
Some people consider it anathema, believing it seriously degrades quality of life, while others find it acceptable in certain situations. So under what circumstances, exactly, are people willing to accept 996?
Recently, this very topic was number one on a platform's trending list and drew plenty of discussion. Some say that when a project is at a critical juncture and overtime is the only way to catch up with the schedule, they are willing to accept 996 to ensure the project can be delivered. In back-end development, for example, an urgent release or a major system optimization may demand extra time and energy for the sake of stability and performance.
Others feel that 996 is also acceptable if the company offers sufficient rewards and opportunities for advancement. In some technical fields, for instance, taking part in challenging projects can benefit one's career, since valuable experience is gained and skills are upgraded.
What about you? When would you accept 996? Would it be for a career goal, or something else?
Well, that's all for this topic. Today I'm sharing the latest interview experience submitted by a reader; at their request, I've organized the questions and answers one by one:
The content is organized as follows (project-related details removed):
Have you ever run into a Kafka message backlog or message loss problem? How did you deal with it?
Message backlog
- In production, a large backlog of unconsumed messages may accumulate on the broker because producers send messages too fast or consumers process them too slowly.
- Solution: if millions of unconsumed messages have piled up and must be handled urgently, modify the consumer program so that it quickly forwards the received messages to a new topic (configured with many partitions), then start multiple consumers to consume the different partitions of the new topic in parallel.
- Consumers may also fail repeatedly to consume messages, due to a change in the message data format or a bug in the consumer program, which likewise leads to a large backlog of unconsumed messages on the broker.
- Solution: forward the messages that fail to be consumed to another queue (similar to a dead-letter queue), then analyze and process the dead-letter messages separately. Kafka does not provide a dead-letter queue out of the box; you need to implement one yourself or integrate a third-party component.
Message loss
Kafka can lose messages both on the producer side when sending and on the consumer side when consuming.
①: Producer-side message loss
- When a producer sends a message, there is an acks mechanism; with acks=0 or acks=1, messages may be lost. The three settings are as follows:
- acks=0
The producer does not wait for any broker to acknowledge receipt; it continues sending the next message immediately. This gives the highest performance but is the most likely to lose messages. It suits scenarios such as big-data statistics reporting, where performance requirements are very high and some data loss is tolerable.
- acks=1
The producer waits until the leader has successfully written the message to its local log, but not until all followers have done so, before sending the next message. In this case, if the leader crashes before the followers have replicated the data, the message is lost.
- acks=-1 or all
The leader waits until all replicas (per the configured replication factor) have successfully written the message to their logs. This guarantees that no data is lost as long as one replica survives, and is the strongest delivery guarantee. It is generally used only for financial-grade scenarios or anything that deals with money. Note that if the replication factor is configured as 1, messages can still be lost, similar to the acks=1 case.
②: Consumer-side message loss
- Message loss on the consumer side is mainly caused by automatic offset commits. With auto-commit enabled, if the consumer crashes after the offset has been committed but before the fetched messages have been processed, those unprocessed messages are lost: they cannot be consumed again, because the committed offset has already moved past them, and on restart consumption begins from the committed offset onward.
- Solution: use manual offset commits on the consumer side, committing only after a message has been fully processed.
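The fix above amounts to disabling auto-commit and committing explicitly after processing. A minimal sketch of the relevant standard Kafka consumer properties (the group name is an illustrative value):

```
# Commit offsets manually, only after a message is fully processed
enable.auto.commit=false
# Illustrative consumer group name
group.id=order-consumers
```

With auto-commit off, the client's manual commit API (e.g. commitSync in the Java client) is called after each batch is processed.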
Underlying structure of slice, map, channel?
Slice:

```go
// From the Go runtime (runtime/slice.go):
type slice struct {
	array unsafe.Pointer // pointer to the underlying array
	len   int            // length
	cap   int            // capacity
}
```
map:
```go
// From the Go runtime (runtime/map.go):
type hmap struct {
	// Note: the format of the hmap is also encoded in cmd/compile/internal/reflectdata/.
	// Make sure this stays in sync with the compiler's definition.
	count     int // # live cells == size of map. Must be first (used by len() builtin)
	flags     uint8
	B         uint8  // log_2 of # of buckets (can hold up to loadFactor * 2^B items)
	noverflow uint16 // approximate number of overflow buckets; see incrnoverflow for details
	hash0     uint32 // hash seed

	buckets    unsafe.Pointer // array of 2^B Buckets. may be nil if count==0.
	oldbuckets unsafe.Pointer // previous bucket array of half the size, non-nil only when growing
	nevacuate  uintptr        // progress counter for evacuation (buckets less than this have been evacuated)

	extra *mapextra // optional fields
}
```
channel:
```go
// From the Go runtime (runtime/chan.go):
type hchan struct {
	qcount   uint           // total data in the queue
	dataqsiz uint           // size of the circular queue
	buf      unsafe.Pointer // points to an array of dataqsiz elements
	elemsize uint16
	closed   uint32
	elemtype *_type // element type
	sendx    uint   // send index
	recvx    uint   // receive index
	recvq    waitq  // list of recv waiters
	sendq    waitq  // list of send waiters

	// lock protects all fields in hchan, as well as several
	// fields in sudogs blocked on this channel.
	//
	// Do not change another G's status while holding this lock
	// (in particular, do not ready a G), as this can deadlock
	// with stack shrinking.
	lock mutex
}
```
Can a slice be used as a key in a map structure?
The answer is no: slices cannot be compared with ==, so they cannot be used as map keys.
The official documentation (/blog/maps) explains:
As mentioned earlier, map keys may be of any type that is comparable. The language spec defines this precisely, but in short, comparable types are boolean, numeric, string, pointer, channel, and interface types, and structs or arrays that contain only those types. Notably absent from the list are slices, maps, and functions; these types cannot be compared using ==, and may not be used as map keys.
What is the order of execution of a code snippet when it has more than one defer inside it?
In Go, if a function has more than one defer statement, they are executed in Last In First Out (LIFO) order. That is, the last registered deferred function is executed first, and the first registered deferred function is executed last.
Under what circumstances can a defer modify a function's return value, and under what circumstances can it not?
A defer runs after the return statement but before the function actually exits, so in some cases it can modify the return value. Here is an example:
```go
package main

import "fmt"

func test() int {
	i := 0
	defer func() {
		fmt.Println("defer1")
	}()
	defer func() {
		i += 1
		fmt.Println("defer2")
	}()
	return i
}

func main() {
	fmt.Println("return", test())
}

// Output:
// defer2
// defer1
// return 0
```
In the example above, the return value of test is not modified. This is due to Go's return mechanism: when the return statement executes, Go copies the return value into a temporary variable, and subsequent changes to i do not affect it. With a named return value, i.e. func test() (i int), the behavior differs:
```go
package main

import "fmt"

func test() (i int) {
	i = 0
	defer func() {
		i += 1
		fmt.Println("defer2")
	}()
	return i
}

func main() {
	fmt.Println("return", test())
}

// Output:
// defer2
// return 1
```
In this example, the return value is modified. For a function with a named return value, the return statement does not copy the value into a temporary variable, so the defer statement's modification of i affects the returned value.
Is map thread-safe?
In Go, the built-in map is not thread-safe.
Simultaneous read and write operations on a common map in multiple threads can lead to unpredictable results, such as data inconsistencies, program crashes, etc.
For example, if one thread is reading data from a map and another thread is deleting or adding elements, a problem may occur.
If you need to use a map safely in a multithreaded environment, you can use some synchronization mechanisms, such as locking protection, or a third-party thread-safe map implementation.
How would you implement a real-time leaderboard in Redis?
In Redis, you can use a Sorted Set (ordered set) to implement a leaderboard that updates in real time. Each element in a Sorted Set is associated with a score, and the elements are kept sorted by that score.
Here is the basic idea:
- Insert data: when new data needs to be added to the leaderboard, insert it as a member of the Sorted Set, using the associated value (e.g. the user's score) as the score.
- Update data: if an element's score needs to change (e.g. the user's score has changed), use the ZINCRBY command to increase or decrease it, which updates the ranking in real time.
- Get the leaderboard: use the ZRANGE command to read a slice of the leaderboard; for example, ZRANGE sorted_set 0 9 returns 10 entries in ascending score order.
- Reverse ranking: to get the leaderboard in descending order (highest score first), use the ZREVRANGE command.
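An illustrative redis-cli session (the key name leaderboard and the member names are made up for the example; the final reply assumes only these two members exist):

```
> ZADD leaderboard 100 alice
> ZADD leaderboard 250 bob
> ZINCRBY leaderboard 200 alice
"300"
> ZREVRANGE leaderboard 0 9 WITHSCORES
1) "alice"
2) "300"
3) "bob"
4) "250"
```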
What determines the height of a skip list node?
In Redis, the height of a skip list node is generated randomly according to a power-law distribution, with a value between 1 and 32.
Each time a new skip list node is created, Redis uses a randomized algorithm to generate a value between 1 and 32 as the size of the node's level array; this is the node's "height", i.e. its number of levels.
This randomized determination of node heights is probabilistic. The power-law distribution makes higher levels relatively rare, which avoids excessive extra memory overhead while preserving lookup efficiency. Lookup, insertion, and deletion thus achieve logarithmic time complexity in the average case, without the space waste of a fixed multi-layer structure.
The advantage of this design is that the skip list does not need to strictly maintain a fixed ratio between the number of nodes in adjacent levels: a newly inserted node decides which levels it appears in based on its own randomly generated height. This improves lookup efficiency while coping well with dynamic insertions and deletions, avoiding complex rebalancing, so the time complexity does not degrade to O(n).
Splitting databases and tables (sharding)
Split database and split table is a database architecture optimization technique, mainly to solve the performance bottlenecks and management challenges that occur when the database has too much data and too many concurrent accesses.
Database splitting refers to splitting one large database into multiple independent databases by business domain or other rules; each database can be deployed on a different server, reducing the load on any single database and improving data-processing capacity.
Table splitting means dividing one large table into multiple smaller tables according to specific rules, such as by time, geography, or business type. A table can be split horizontally, distributing its rows across multiple tables, or vertically, moving groups of related columns into different tables by business relevance.
There are several main ways to realize split library and split table:
- Hash-based splitting: apply a hash function to one field of the data and route each row to a database or table based on the hash value.
- Range-based splitting: route rows by the value range of a field, such as a time interval or a numeric range.
- Enumeration-based splitting: route rows by an enumerated field, such as region or business type.
The benefits of splitting libraries and tables include:
- Improve the performance and responsiveness of the system and reduce the time spent on data queries and operations.
- Enhance the scalability of the system to easily cope with growing data volumes and business needs.
- Facilitates data management and maintenance and reduces the complexity of individual libraries or tables.
Welcome ❤
We run a free group for sharing interview questions, where members exchange notes and make progress together.
It might even get you the latest interview questions for the company you're targeting.
If you're interested, add me on WeChat: wangzhongyang1993, with the note: csdn interview group.