
Network Fundamentals - Transport Layer

2024-08-14 17:44:44

Network Fundamentals (II)

1. Application layer

1.1 Protocol customization

Each layer of the network stack has its own protocols. Application-layer protocols are typically customized by the programmer: an agreement mutually settled between the two communicating ends.

e.g.:

Suppose the sender wants to transmit a struct of personal information. TCP transmits data as a byte stream and cannot carry a struct directly, so all the data is generally converted into string form before sending, and the protocol specifies delimiters, boundaries, or lengths to prevent over-reading or under-reading.

The process of converting a specific object or structure into a stream of bytes for transmission is called serialization.

The process of converting a stream of bytes retrieved from the network into a specific object or structure is called deserialization.

In practice, however, we rarely need to write serialization and deserialization by hand; mature technologies already exist:

  1. JSON
  2. Protobuf
  3. XML

When reading, the receiver must ensure that each read yields one complete message according to the protocol — for example, reading up to the nth delimiter, or reading exactly the declared length of one message.
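The length-based framing described above can be sketched as follows. This is a minimal illustration, not any standard library API; the function names `frame_encode`/`frame_decode` are my own.

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Encode: write a 4-byte big-endian length header, then the payload.
   Returns total bytes written, or 0 if the output buffer is too small. */
size_t frame_encode(const char *payload, uint32_t len,
                    unsigned char *out, size_t out_cap) {
    if (out_cap < 4u + len) return 0;
    out[0] = (unsigned char)(len >> 24);
    out[1] = (unsigned char)(len >> 16);
    out[2] = (unsigned char)(len >> 8);
    out[3] = (unsigned char)(len);
    memcpy(out + 4, payload, len);
    return 4u + len;
}

/* Decode: if buf holds at least one complete message, copy its payload
   into out and return the number of bytes consumed from buf;
   return 0 when more data must be read first (under-read protection). */
size_t frame_decode(const unsigned char *buf, size_t buf_len,
                    char *out, size_t out_cap) {
    if (buf_len < 4) return 0;                        /* header incomplete */
    uint32_t len = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16)
                 | ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
    if (buf_len < 4u + len || out_cap < len) return 0; /* body incomplete */
    memcpy(out, buf + 4, len);
    return 4u + len;
}
```

The receiver calls `frame_decode` repeatedly on whatever bytes have arrived; a return of 0 means "read more from the socket first", which is exactly the over-read/under-read protection the protocol needs.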

1.2 HTTP protocol

HTTP is an application-layer protocol commonly used with browsers. Plain HTTP has largely been phased out because its security is weak; it has mostly been replaced by HTTPS. The usage is almost identical, though, so HTTP needs to be understood first!

Understanding URLs

When you visit a website, the domain name actually corresponds to an IP address; the browser performs this resolution. Visiting site content really means accessing resources on the remote web server (images and videos are resources), which is equivalent to accessing a file under a specific path on the server. Which file a given click accesses is decided by the front-end code, which, once written, is deployed on the server.

urlencode and urldecode

Some characters, such as \ and ?, have special meanings inside a URL, so if they appear literally things can go wrong; a specific algorithm is therefore needed to convert them into a different form.

  • urlencode escaping rules

Take each byte of the character to be escaped, write its value in hexadecimal, and emit it as %XY, where X is the high 4 bits and Y is the low 4 bits of the byte. Multi-byte characters are escaped one byte at a time.

  • urldecode

urldecode is the inverse of urlencode.
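The %XY rule and its inverse can be sketched like this. This is an illustrative implementation, not a library function; the unreserved-character set used here follows common practice and is an assumption on my part.

```c
#include <ctype.h>

static int hexval(char c) {
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return -1;
}

/* urlencode: every byte outside the unreserved set
   (letters, digits, '-', '_', '.', '~') becomes %XY. */
void url_encode(const char *in, char *out) {
    const char *hex = "0123456789ABCDEF";
    while (*in) {
        unsigned char c = (unsigned char)*in++;
        if (isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~') {
            *out++ = (char)c;
        } else {
            *out++ = '%';
            *out++ = hex[c >> 4];   /* high 4 bits -> first hex digit  */
            *out++ = hex[c & 0xF];  /* low 4 bits  -> second hex digit */
        }
    }
    *out = '\0';
}

/* urldecode: the inverse — turn each %XY back into the original byte. */
void url_decode(const char *in, char *out) {
    while (*in) {
        if (*in == '%' && hexval(in[1]) >= 0 && hexval(in[2]) >= 0) {
            *out++ = (char)(hexval(in[1]) * 16 + hexval(in[2]));
            in += 3;
        } else {
            *out++ = *in++;
        }
    }
    *out = '\0';
}
```

For example, `url_encode` turns `?` (byte 0x3F) into `%3F`, and `url_decode` turns it back.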

HTTP protocol format

Take as an example a GET request issued by a browser under the HTTP protocol.

Request

The first line is the request line: request method, url, http version, ending with \r\n.

The next n lines are request headers. Their content is filled in by the browser, so we generally do not need to change it; each line ends with \r\n.

A blank line (\r\n) separates the headers from the body; everything after it is the body.

//Request format
GET / HTTP/1.1
Accept: image/jpeg, */*
Accept-Language: zh-cn
Connection: Keep-Alive
Host: localhost
User-Agent: Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)
Accept-Encoding: gzip, deflate

username=jinqiao&password=1234
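A server can split the first line of such a request with a helper like the one below. This is a sketch; `parse_request_line` is a hypothetical name, and the field widths are assumptions chosen for the example.

```c
#include <stdio.h>

/* Split "GET /index.html HTTP/1.1" into method, url, and version.
   Buffers must hold at least 16, 256, and 16 bytes respectively.
   Returns 1 on success, 0 if the line does not have three fields. */
int parse_request_line(const char *line,
                       char *method, char *url, char *version) {
    return sscanf(line, "%15s %255s %15s", method, url, version) == 3;
}
```

After this call, the server can branch on `method`, map `url` to a file, and check `version`.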

Response

The first line is the status line: http version, status code, status code description, ending with \r\n.

The status code is returned to the client and represents how the request turned out: 200 means success; another common one is 404, meaning the page does not exist.

Reference: Illustrated HTTP Status Codes — an article for memorizing HTTP status codes.

The n lines that follow are response headers, which can specify the format of the returned resource and other attributes; each line ends with \r\n.

A blank line (\r\n) separates the headers from the body; everything after it is the body.

//Response format
HTTP/1.1 200 OK
Server: Apache Tomcat/5.0.12
Date: Mon, 6 Oct 2003 13:23:42 GMT
Content-Type: text/html
Last-Modified: Mon, 6 Oct 2003 13:23:42 GMT
Content-Length: 112

<html>
<head>
<title>HTTP Response Example</title>
</head>
<body>
Hello HTTP!
</body>
</html>

When requesting the homepage, the url is usually just /. The server can test for this: if the url is /, serve the homepage. The web root can be set to a directory such as wwwroot/; the server then reads the requested file, puts its contents in the response body, and sends it to the browser.
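The url-to-file mapping just described can be sketched as below. The directory name `wwwroot` and the choice of `index.html` as the homepage file are assumptions taken from the text, and `url_to_path` is a hypothetical helper name.

```c
#include <stdio.h>
#include <stddef.h>

/* Map the requested url onto a path under the web root;
   "/" means the homepage. */
void url_to_path(const char *url, char *path, size_t cap) {
    if (url[0] == '/' && url[1] == '\0')
        snprintf(path, cap, "wwwroot/index.html");  /* homepage */
    else
        snprintf(path, cap, "wwwroot%s", url);      /* other resources */
}
```

The server would then open the resulting path, read the file, and place its bytes in the response body.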

HTTP content-type

  • Generally refers to the Content-Type header present in a web response, which defines the type of the file and the encoding of the page, determining in what form and encoding the browser will read the file

  • In a response, different files have different types, so the response header needs to specify this Content-Type; one design is a hash table mapping the accessed file's suffix to the corresponding type
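A suffix-to-type lookup like the one just described can be sketched as follows. A real server would use a larger table (or a hash table, as the text suggests); the entries here are just a few common examples.

```c
#include <string.h>

/* Tiny suffix -> Content-Type lookup, keyed on the last '.' in the path. */
const char *content_type_of(const char *path) {
    const char *dot = strrchr(path, '.');
    if (!dot) return "text/html";            /* default when no suffix */
    if (strcmp(dot, ".html") == 0) return "text/html";
    if (strcmp(dot, ".jpg") == 0)  return "image/jpeg";
    if (strcmp(dot, ".png") == 0)  return "image/png";
    if (strcmp(dot, ".js") == 0)   return "application/javascript";
    return "application/octet-stream";       /* unknown binary */
}
```

The returned string goes straight into the response's Content-Type header.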

HTTP Common Header

Content-Type: data type (text/html, etc.)

Content-Length: Length of the Body

Host: the client informs the server which host, and which port on it, the requested resource is on.

User-Agent: declares the user's operating system and browser version. Referer: the page from which the current request was made.

Location: used with 3xx status codes; tells the client where to visit next.

Cookie: Used to store a small amount of information on the client side. Often used to realize the function of session

POST method, cookies and session id

2. Transport layer

2.1 Port number

A port number (port) identifies a particular communicating application on a host.

Why do we need port numbers? Can't we just use a PID or the like?

Each time a program restarts, its PID is reassigned, so the concept of a port number is introduced; writing to or reading from a port number means writing to or reading from the process bound to it.

I. It makes locating the destination process convenient.

II. It decouples the network from the operating system's process management.

Relationship between port number and process

A port number can be bound to only one process, but one process can bind multiple port numbers.

In the TCP/IP protocol suite, a five-tuple — source IP, source port number, destination IP, destination port number, protocol number — identifies a communication.

At the application layer, the port number is in fact enough to find the corresponding application-layer protocol: the protocol is implemented in the program, so delivering data to the process through the port number is implicitly delivering it to the peer's application-layer protocol.
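The five-tuple above can be written down directly as a struct. This is only a sketch of the idea (the struct and function names are mine); a kernel demultiplexes incoming segments to sockets by exactly this kind of match.

```c
#include <stdint.h>

/* One TCP/IP communication is identified by this five-tuple. */
struct five_tuple {
    uint32_t src_ip;
    uint16_t src_port;
    uint32_t dst_ip;
    uint16_t dst_port;
    uint8_t  protocol;   /* e.g. 6 = TCP, 17 = UDP */
};

/* Two segments belong to the same communication iff their tuples match. */
int same_connection(const struct five_tuple *a, const struct five_tuple *b) {
    return a->src_ip == b->src_ip && a->src_port == b->src_port &&
           a->dst_ip == b->dst_ip && a->dst_port == b->dst_port &&
           a->protocol == b->protocol;
}
```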

Port number range division

  • 0 - 1023: Well-known port numbers, HTTP, FTP, SSH and other widely used application layer protocols, their port numbers are fixed.

  • 1024 - 65535: Port numbers dynamically assigned by the operating system. The port number of the client program is assigned by the operating system from this range.

2.2 UDP protocol

UDP: User Datagram Protocol

UDP datagram format

UDP stores the source and destination port numbers

The 16-bit UDP length field gives the length of the entire datagram (UDP header + UDP data), so the maximum is 2^16 B = 64 KB.

UDP Features

Connectionless : Only need to know the destination ip and port to send the message.

Unreliable : Retransmission mechanism is not used and there is no way to confirm if the other party has received the message.

Datagram-oriented : no flexibility to control the number or size of reads.

datagram-oriented

UDP transmission is datagram-oriented, i.e., it sends as much as it can at one time, without splitting or merging.

Buffers for UDP

  • UDP has no send buffer: calling send (or a similar interface) hands the data straight to the operating-system kernel, which attaches the header and passes it to the network layer for the subsequent steps.
  • UDP has a receive buffer: the order of data in the receive buffer is not necessarily the order in which it was sent, and once the buffer is full, data arriving afterwards is discarded.

UDP sockets can both read and write — a property called full duplex.

UDP sends at most 64K of data at a time. In today's networks that is often not enough, so the only workaround is to send datagrams separately and splice them together manually, which is cumbersome; hence UDP is unsuitable for most such scenarios.
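The manual-splitting arithmetic implied above is simple to write down. The constant reflects the 16-bit length field minus the 8-byte UDP header; the function name is my own.

```c
#include <stddef.h>

/* Maximum UDP payload: 2^16 - 1 total length minus the 8-byte header. */
enum { UDP_MAX_PAYLOAD = 65535 - 8 };

/* How many datagrams are needed to carry `total` bytes by hand-splitting. */
size_t udp_chunks_needed(size_t total) {
    if (total == 0) return 0;
    return (total + UDP_MAX_PAYLOAD - 1) / UDP_MAX_PAYLOAD;  /* ceiling div */
}
```

Anything over roughly 64K must be split into multiple sends and reassembled by the application — exactly the cumbersome step the text mentions.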

UDP-based application layer protocol

NFS: Network File System

TFTP: Trivial File Transfer Protocol

DHCP: Dynamic Host Configuration Protocol

BOOTP: Boot Protocol (for diskless device booting)

DNS: Domain Name System (name resolution)

...plus any UDP-based application-layer protocol you write yourself.

2.3 TCP Protocol

Full name of TCP: Transmission Control Protocol

TCP transport process

The essence of transmitting data is copying: data is copied from the application layer into the transport-layer send buffer; the receiver copies it from the network into its receive buffer; and the receiving application layer then copies it into its own buffer.

TCP is full duplex.

TCP protocol segment format

16-bit source port: stores the source port number

16-bit destination port: stores the destination port number

32-bit sequence number: stores the sequence number of the current data segment

32-bit acknowledgement number: the sequence number of the acknowledgement returned to the peer.

4-bit header length: counts the header in units of 4 bytes, so a value of N means the header is N × 4 bytes long. Because the options field may or may not be present, this field is needed to say how long the header actually is. E.g., with no options the header occupies 5 units, so the field holds 5 and the header is 5 × 4 = 20 bytes.
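The units-of-4-bytes rule can be checked with one line of code. The 4-bit field lives in the high nibble of the 13th header byte; the function name is illustrative.

```c
/* The 4-bit header-length (data offset) field counts 4-byte units.
   It sits in the high nibble of byte 12 (0-indexed) of the segment. */
unsigned tcp_header_bytes(unsigned char offset_byte) {
    return (unsigned)(offset_byte >> 4) * 4;  /* 5 -> 20 bytes, 15 -> 60 max */
}
```

So the minimum header (no options) is 20 bytes and the maximum, with the field at its 4-bit limit of 15, is 60 bytes.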

6 flag bits: more on this later

16-bit window size: advertises how much receive-buffer space the sender of this segment has left (used for flow control)

16-bit urgent pointer: points to the location of the urgent data

6 flag bits:

  • URG : urgent flag. Segments are normally processed in queue order; when a received segment has this bit set, it means the data is urgent and should be moved to the front and handled first

  • ACK: acknowledgement flag. A segment with this bit set acknowledges the peer's message. In fact, a valid data reply also counts as an acknowledgement, so for most of a communication ACK is set on nearly every segment

  • PSH: prompts the receiver to read the data out of its buffer as soon as possible

  • RST: reset flag. When a connection breaks unexpectedly and one side detects the inconsistency, this flag is sent to ask the other side to re-establish the connection

  • SYN: set during the three-way handshake when establishing a connection

  • FIN: set during the four-way wave when closing a connection

Acknowledgement mechanism

An acknowledgement can be a valid data reply or just a header-only segment; both count, because receiving any reply from the peer proves the last message you sent was received. So acknowledgements are not necessarily dedicated ACK-only segments; sometimes normal data segments serve as acknowledgements too.

Only when you receive an acknowledgement from the peer can you be 100% sure your last message arrived.

So there will always be a last message whose delivery cannot be guaranteed, because the peer does not reply to the very last message sent.

The process above is described serially, but in reality segments are sent and acknowledged in parallel.

Byte numbering

TCP numbers every byte. The sequence number I send marks the position of the last byte of this segment's data; the acknowledgement number the peer returns is that number + 1, meaning "start sending from this sequence number next"!
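The "+1" relationship can be written as a tiny helper. This is a simplified sketch: here `seq` is taken as the number of the first payload byte, so the acknowledgement number (last byte + 1) is simply `seq + payload_len`.

```c
#include <stdint.h>

/* The acknowledgement number names the next byte the receiver expects:
   it confirms everything before it has been received. */
uint32_t next_ack(uint32_t seq, uint32_t payload_len) {
    return seq + payload_len;
}
```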

Timeout retransmission mechanism

A timeout retransmission has two possible causes: the sender's packet is lost on the way, or the responder's acknowledgement is lost on the way back; either can trigger a retransmission.

  • Scenario 1: Packet loss on the sender side

  • Scenario 2: Packet loss on the responding side

Whichever side loses the packet, the timeout retransmission mechanism is triggered.

When the sender sends a data segment and the peer's response is delayed, the data may have been lost; the segment is then resent, which also settles whether a packet really was lost.

Whether to retransmit is judged by elapsed time: if the peer has not responded for long enough, the packet is presumed lost and retransmission fires. But choosing that time threshold is tricky:

  • Too long, and a lot of time is wasted waiting.

  • Too short, and duplicates are frequently resent within a given period.

TCP dynamically calculates this maximum timeout in order to ensure high performance in all environments.

In Linux (and BSD Unix and Windows as well), the timeout is controlled in units of 500 ms, and each retransmission's timeout is an integer multiple of 500 ms.
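A common simplified teaching model of this schedule — an assumption on my part, since real kernels compute the retransmission timeout dynamically from measured round-trip times — is that the wait doubles on each successive retransmission:

```c
/* Simplified model: the n-th retransmission waits 500 ms * 2^n,
   i.e. 500, 1000, 2000, 4000, ... (all integer multiples of 500 ms).
   Real kernels derive the timeout from measured RTT instead. */
unsigned retransmit_timeout_ms(unsigned nth_retry) {
    return 500u << nth_retry;
}
```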

TCP connection management mechanism (three handshakes, four waves)

  • Three-way handshake

Why three handshakes? Would one, two, or four work?

  • Suppose one handshake sufficed: the client would only need a single SYN to establish a connection with the server. A malicious client could then loop, frantically sending SYN requests; each would cost the server memory for a connection. So one is not enough. This attack is known as a SYN flood attack.

  • With two handshakes, a SYN flood attack can be launched just as with one.

  • So why does three times work?

    • Three handshakes are the lowest-cost way to verify that the channel is full duplex (each side both sends and receives at least once), as described below.

    • Three handshakes effectively prevent a single host from attacking the server.

    Once the three handshakes succeed, the client also establishes connection state on its own host, which likewise consumes memory; if it keeps opening connections, a single host's memory will run out long before a server cluster's capacity does.

  • With four handshakes, the last segment is sent by the server, which has no way to confirm that the client received it, so four is no good; the same argument rules out every even count.

  • An odd number of handshakes greater than 3 would also work, but since 3 already works, anything more is a waste of system resources and memory.

The above explains why there are exactly three handshakes; establishing a connection requires all three.

The three-way handshake is not guaranteed to succeed; the biggest worry is the final ACK being lost. If it is lost, establishment fails for the moment, but the timeout retransmission mechanism covers this case.

Connections must be managed: the server holds a large number of them, so the kernel first describes each one (a data structure per connection) and then organizes them.

  • Four-way wave

Client and server are on an equal footing, and disconnecting is mutual — both parties have to agree — so the four waves cannot be initiated by one side alone.

In the example below, the client is the initiator and the server the receiver; the reverse works the same way.

① When the client closes its file descriptor, it initiates the disconnect: it sends a segment with the FIN flag set, and after sending it the client's state changes to FIN_WAIT_1.

② The server receives the disconnect request and returns an ACK; the server's state changes to CLOSE_WAIT. When the client receives the ACK, its state changes to FIN_WAIT_2.

③ Normally, after the client closes its fd, the server finishes reading the data and closes its corresponding socket. Only when that socket is closed does the server send a segment with FIN to the client; once it is sent, the server's state changes to LAST_ACK.

④ When the client receives the server's FIN, it sends the last ACK and then enters the TIME_WAIT state; the server enters the CLOSED state after receiving this ACK.


  • Sometimes ② and ③ are sent to the client together, in which case there are only three waves.
  • If the server never closes the corresponding socket, it stays stuck in CLOSE_WAIT (a bug introduced when writing the program); the connection is never torn down and keeps occupying resources.

TIME_WAIT Status Explained

When the client initiates the last ACK request, why is the state entered not CLOSED but TIME_WAIT?

  • Reason 1: To ensure that the last ACK sent is received by the server and that no packets are lost

    The first three segments (the client's FIN, the server's ACK, and the server's FIN) need not be feared: if any of them is lost, a timeout retransmission occurs. Only the last ACK gets no further reply, so its loss is the one case that cannot be detected by the client directly. If that ACK is lost, the server, never receiving it, will time out and retransmit its FIN, again waiting for an ACK — and TIME_WAIT keeps the client alive long enough to answer that retransmitted FIN.

  • Reason 2: when both parties disconnect, there may still be segments stranded in the network; waiting ensures those stranded segments dissipate.


  • How long TIME_WAIT lasts, and its impact
  • After sending the last ACK, the client enters TIME_WAIT, which lasts 2 × MSL. MSL is the maximum time any segment can survive on the network; beyond it the segment is discarded.

  • The actual duration is not fixed at 2 × MSL; each system kernel has a configured value, which you can check with cat /proc/sys/net/ipv4/tcp_fin_timeout

  • If we terminate the server with Ctrl-C, the server is the side that actively closes the connection, and during TIME_WAIT the same server port cannot be listened on again.


  • Solution for bind failures caused by TIME_WAIT state

When a server process with active connections exits, those connections linger and re-binding is refused for a while. This is unreasonable: with many clients, having to wait a long time before restarting would cause real losses. So we set an option manually when recreating the socket.

Do the following before binding listenfd:

int opt = 1;
setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
//setsockopt() sets the socket option SO_REUSEADDR to 1, which allows
//multiple socket descriptors with the same port number (but different IP
//addresses) to be created, so the address can be reused during TIME_WAIT.

sliding window

TCP's kernel-level send buffer can be divided into regions.

It divides into three areas: already sent and acknowledged, the sliding window itself, and not yet sent. At first everything is unsent data, and the sliding window slides to the right.

The window size is the maximum amount of data that can be sent without waiting for an acknowledgement.

Sliding windows are characterized by the ability to continue without waiting for an acknowledgement of the response, enabling parallel sending (improving performance).

  • How is the size of the sliding window set?

The receiver reports its remaining receive-buffer space in each acknowledgement, so the size of the sender's sliding window changes dynamically, tracking how much free buffer the receiver has left.

  • The sliding window only moves to the right or stays in place

Moving right: when the receiver's application layer reads data away, the receive buffer frees up; the receiver notifies the sender of this in its acknowledgement, and the sender's sliding window moves right — both the left pointer and the right pointer advance (the "pointers" here are array indices).

Staying in place: when the receiver's application layer is not reading, or reads slowly, the receive buffer gradually fills, and the receiver advertises its shrinking remaining window. The sender's window then does not advance: if the application never reads, neither pointer moves; and if the sender meanwhile sends out everything in the window, the left pointer keeps moving right while the right pointer stays put, so the window shrinks.

  • The sliding window never runs out of space, because it is essentially a circular queue (simulated with an array), so space can always be found.
  • Suppose the leftmost data in the sliding window is sent but lost, or data in the middle of the window is lost — does the window move or not?

If a packet is lost, there are two kinds of loss: either the data never arrived at all, or it arrived but the peer's acknowledgement was lost on the way back.

In the first case, suppose segment ① never reaches the peer while the segments behind it are sent anyway. The server, receiving those later segments, sees that the sequence numbers do not line up, so every acknowledgement it returns still carries ①'s starting sequence number — a reminder that the next expected byte is ①'s start. The sender thus discovers the loss and retransmits ①'s data.

In the second case, the server received ① but the client never got its acknowledgement. The sliding window keeps moving right anyway: suppose ①'s ACK is lost but ②'s arrives, carrying ②'s acknowledgement number. An acknowledgement number confirms that everything before it has been received, so it does not matter that ①'s own ACK was lost — the window can simply slide to the latest confirmed position!

So if ① is "lost", there are two cases depending on where the loss happened: in the first a retransmission occurs; in the second the loss is simply skipped over.

The middle and right positions behave the same way, because as the window moves, positions ② and ③ eventually become the leftmost position.

  • What do I do when I run out of room in the sliding window?

The sliding window uses a ring queue (simulated with an array), so it never lacks space; it simply overwrites the sent-and-acknowledged region in front of it.
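The two-pointer behavior described above can be sketched with plain indices. This is a simplified model of the idea, not kernel code; the struct and function names are mine, and sequence-number wraparound is ignored for clarity.

```c
#include <stdint.h>

/* A sliding window as two indices into the send buffer:
   [left, right) is data that may be sent without waiting for an ACK. */
struct window { uint32_t left, right; };

/* An ACK numbered `ack` confirms everything before it, so it slides the
   left edge forward; a stale (smaller) ACK never moves it back. */
void on_ack(struct window *w, uint32_t ack) {
    if (ack > w->left) w->left = ack;
}

/* The receiver's advertised window sets how far the right edge may go. */
void on_advertise(struct window *w, uint32_t peer_window) {
    w->right = w->left + peer_window;
}
```

This also shows the skip-over case from the text: a larger acknowledgement number moves `left` past a segment whose own ACK was lost.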

congestion control

Although the sliding window sends large amounts of data efficiently, blasting out a large amount at the very start risks worsening network congestion.

The network is a shared resource: if many users are already sending heavily, the network becomes slow and congested. Sending a large burst then gets no acknowledgements back, which triggers timeout retransmissions of all that data — wasting system resources and making the congestion even worse.

So it leads to the concept of congestion control.

  • Sending starts with a congestion window of 1.
  • Each round of acknowledgements received doubles the congestion window.
  • Each time, the sliding window size = min(the peer's advertised window, the congestion window).

So the congestion window follows a slow-start, exponentially growing strategy; once it is big enough, the effective size follows the peer's feedback, because the smaller of the two values is taken.

To avoid growing in such large jumps, the ×2 exponential growth cannot continue forever: once a threshold is reached, growth becomes linear.

  • When TCP starts, the slow-start threshold equals the maximum window size.
  • At each timeout retransmission, the slow-start threshold becomes half the current value, and the congestion window resets to 1.
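The rules above can be simulated in a few lines. This is a sketch of the scheme exactly as the text states it (doubling below the threshold, +1 above, halving and reset on timeout); the names are mine, and real TCP stacks implement more refined variants.

```c
/* Congestion-control state: congestion window and slow-start threshold. */
struct cc { unsigned cwnd, ssthresh; };

/* One round of acknowledgements arrives. */
void cc_on_ack_round(struct cc *c) {
    if (c->cwnd < c->ssthresh) c->cwnd *= 2;  /* exponential (slow start) */
    else                       c->cwnd += 1;  /* linear after threshold   */
}

/* A timeout retransmission fires. */
void cc_on_timeout(struct cc *c) {
    c->ssthresh = c->cwnd / 2;   /* threshold halves */
    c->cwnd = 1;                 /* window resets    */
}

/* What may actually be sent: min(peer's advertised window, cwnd). */
unsigned cc_send_window(const struct cc *c, unsigned advertised) {
    return c->cwnd < advertised ? c->cwnd : advertised;
}
```

Starting from cwnd = 1 with threshold 8, the window grows 1, 2, 4, 8, then 9, 10, ... — exponential, then linear.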

When TCP communication starts, the network throughput gradually increases; as the network becomes congested, the throughput immediately decreases.

Congestion control is actually a compromise designed to improve network transmission efficiency.

Delayed acknowledgement

Sometimes the acknowledgement is deliberately delayed a little before being sent; the point is to raise the chance of a favourable event occurring in the meantime, improving efficiency.

Suppose you are about to send an acknowledgement, and just then the application layer reads a large amount of data out of the receive buffer. If you had waited a little, the acknowledgement could have advertised a larger window, and the peer could send more data. So by delaying slightly, you give the application layer a chance to drain the transport-layer buffer, letting you feed back a larger window size.

But you cannot deliberately wait every single time — that would itself hurt efficiency — so delayed acknowledgement usually follows a strategy:

  • Count limit: acknowledge every N packets.
  • Time limit: acknowledge once the maximum allowed delay elapses.

Piggybacked acknowledgement

On top of delayed acknowledgement, we notice that in many cases the client and server also "send and receive" at the application layer.

That is, the client says "How are you" to the server, and the server replies "Fine, thank you".

The ACK can then hitch a ride, returning to the client together with the server's "Fine, thank you" response.

During the four-way wave, the peer's ACK and FIN can likewise sometimes be combined, turning it into a "three-way wave".

The second parameter of listen

The second parameter of listen is like the maximum length of a queue: when the number of established-but-not-yet-accepted connections is full, no more can be taken, so new arrivals can only queue and wait. This parameter is that queue's length.

At this point a client may have completed its three handshakes while the server cannot accept the connection yet (until a slot frees up), so accept has not been called. The client waits, and its state looks normal, but on the server the connection shows SYN_RECV rather than ESTABLISHED (from the server's point of view, the three-way handshake is not truly complete).

If the handshake does not complete for a long time, the server closes the connection automatically.

The TCP layer allows at most (listen's second parameter + 1) such half-established links.