What about requests that take a long time? For example, requests carrying large amounts of data, where the business logic takes so long to process that the response times out.
The timeout here refers to the ReadTimeout: the period between the end of sending the request content and the beginning of receiving the response data. An ordinary HTTP request produces no data at all during this window, so it can easily hit the ReadTimeout.
There are two common ways to avoid this:

- HTTP chunked transfer encoding flushes each block of data as soon as it is ready, which resets the ReadTimeout.
- Server-Sent Events (SSE), where the server keeps sending messages (including heartbeats) that refresh the ReadTimeout.

With either chunking or streaming, the amount of business logic and data handled per message is smaller, so timeouts are far less likely.
Both approaches just let the requester see results as soon as possible: each piece of data is pushed the moment it is produced. They do not reduce the total time needed to process all the data. On the client side, our JavaScript gets a callback for each piece received and can print or process it immediately, rather than waiting for all the data to arrive before our code regains control. To return data in batches, the server-side business logic must not process everything at once, but process or query in batches.
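The batch idea above can be sketched with a generator that yields results page by page instead of materializing everything first. This is a minimal illustration; `queryPage`, the page size, and the 7-row data source are all made up for the example:

```javascript
// Hypothetical paged query: returns one page of results at a time.
function queryPage(pageIndex, pageSize) {
    const total = 7; // pretend the data source holds 7 rows
    const start = pageIndex * pageSize;
    const rows = [];
    for (let i = start; i < Math.min(start + pageSize, total); i++) {
        rows.push({ id: i + 1 });
    }
    return rows;
}

// Yield results batch by batch; the caller can flush each batch
// to the response as soon as it is ready.
function* queryInBatches(pageSize) {
    let page = 0;
    while (true) {
        const rows = queryPage(page++, pageSize);
        if (rows.length === 0) break;
        yield rows;
    }
}

const batches = [...queryInBatches(3)];
// 7 rows with a page size of 3 → batches of 3, 3, and 1 rows
```

The key point is that each `yield` is a natural flush boundary: nothing forces the server to hold the full result set in memory before the first byte goes out.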
HTTP message chunking
```
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked

54
[{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}, {"id": 3, "name": "Charlie"}]
52
[{"id": 4, "name": "David"}, {"id": 5, "name": "Eve"}, {"id": 6, "name": "Frank"}]
52
[{"id": 7, "name": "Grace"}, {"id": 8, "name": "Helen"}, {"id": 9, "name": "Ian"}]
0

```

- The `Transfer-Encoding: chunked` header indicates that this is a chunked response.
- `54`, `52`, and `52` are the byte sizes (in hexadecimal) of the first, second, and third data chunks, respectively.
- The JSON data immediately follows each chunk size line.
- Each chunk is terminated by a CRLF.
- A `0`-sized chunk followed by a final CRLF marks the end of the data.
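As a sanity check on the framing rules above, here is a small sketch that wraps payload strings in chunked framing (size in hex, CRLF separators, terminating zero-sized chunk). It is illustrative only; Node's `Buffer` is used just to get the byte length:

```javascript
// Encode a list of payload strings as an HTTP/1.1 chunked body.
function encodeChunked(payloads) {
    let body = '';
    for (const p of payloads) {
        // Each chunk: size in hex, CRLF, payload bytes, CRLF.
        const size = Buffer.byteLength(p, 'utf-8').toString(16).toUpperCase();
        body += `${size}\r\n${p}\r\n`;
    }
    // A zero-sized chunk plus a final CRLF terminates the body.
    return body + '0\r\n\r\n';
}

const framed = encodeChunked(['[{"id": 1, "name": "Alice"}]']);
// The payload is 28 bytes, so the frame starts with "1C\r\n"
```

In a real server this framing is done for you by the HTTP stack; the point here is only to make the size/CRLF layout concrete.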
The question is: in what form does the server send this, and how does the browser receive it? What does the behavior look like? Let's try it out.
- Server side
```csharp
[HttpGet]
public async IAsyncEnumerable<string> Get()
{
    var dataList = new[]
    {
        new { Id = 1, Name = "Alice" },
        new { Id = 2, Name = "Bob" },
        new { Id = 3, Name = "Charlie" }
    };

    foreach (var data in dataList)
    {
        // Simulate data-processing delay
        await Task.Delay(2000);
        yield return $"ID: {data.Id}, Name: {data.Name}\n";
    }
}
```
The server returns an asynchronous stream via `IAsyncEnumerable<T>`, and Kestrel then adds the chunked transfer header to the response. Internally, Kestrel uses `await foreach` to iterate over this method, waiting for each piece of data to be produced and pushing it into the response one piece at a time.
- Browser side
```javascript
async function fetchData() {
    try {
        const response = await fetch('/data');
        if (!response.ok) {
            throw new Error('Network response was not ok');
        }
        const reader = response.body.getReader();
        const decoder = new TextDecoder('utf-8');
        const list = document.getElementById('data-list');
        while (true) {
            const { value, done } = await reader.read();
            if (done) break;
            const textChunk = decoder.decode(value, { stream: true });
            const li = document.createElement('li');
            li.textContent = textChunk;
            list.appendChild(li);
        }
    } catch (error) {
        console.error('Fetch Error:', error);
    }
}
```
To see how it works in practice: there is a 2-second delay before each piece of the response body. Looking at the timing waterfall, the first read runs into the first `Task.Delay(2000)`, and then the server starts responding with data. Once the green (waiting) segment ends, the browser receives the first piece of data and enters the blue (content download) segment.
This only mitigates the slow-transmission problem by letting the receiver see data as early as possible; it does not shorten the time for the complete response to finish.
SSE Streaming
For SSE streaming, the server needs to set specific response headers, keep the HTTP connection open, and write each piece of data directly to the response as a push, rather than returning the data and releasing the connection.
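On the wire, each SSE push is a `data:` line terminated by a blank line (the timestamps here are illustrative):

```
data: {"message": "Hello, world!", "timestamp": "2024-01-01T00:00:00Z"}

data: {"message": "Hello, world!", "timestamp": "2024-01-01T00:00:01Z"}

```

The blank line is what tells the browser a message is complete, which is why the server code below appends `\n\n` to every message.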
- Server side
```csharp
public async Task Stream()
{
    Response.ContentType = "text/event-stream";
    Response.Headers.Append("Cache-Control", "no-cache");
    Response.Headers.Append("Connection", "keep-alive");

    // Push data in a loop until the client disconnects
    while (!HttpContext.RequestAborted.IsCancellationRequested)
    {
        // Push simulated data
        var payload = JsonSerializer.Serialize(new { message = "Hello, world!", timestamp = DateTime.Now });
        await Response.WriteAsync($"data: {payload}\n\n");
        await Response.Body.FlushAsync();

        // Push again after a 1-second interval
        await Task.Delay(1000);
    }
}
```
- Browser side
```javascript
const eventSource = new EventSource('/api/sse/stream');

eventSource.onmessage = function(event) {
    const message = JSON.parse(event.data);
    const messageElement = document.createElement('div');
    messageElement.textContent = `Message: ${message.message}, Timestamp: ${message.timestamp}`;
    document.getElementById('messages').appendChild(messageElement);
};

eventSource.onerror = function(event) {
    console.error('Error:', event);
};
```
However, EventSource can only pass parameters in the request URL; there is no way to attach custom request headers such as Authorization.
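A common workaround is to stream with fetch (which can carry an Authorization header) and parse the event-stream frames yourself. Here is a minimal, hypothetical parser sketch that only handles `data:` lines (ignoring `event:`/`id:` fields and CR handling):

```javascript
// Split accumulated event-stream text into complete "data:" messages.
// Returns the parsed messages plus any trailing incomplete remainder,
// which should be prepended to the next chunk read from the stream.
function parseSseChunk(buffer) {
    const messages = [];
    const frames = buffer.split('\n\n');
    const remainder = frames.pop(); // last piece may be incomplete
    for (const frame of frames) {
        for (const line of frame.split('\n')) {
            if (line.startsWith('data: ')) {
                messages.push(line.slice('data: '.length));
            }
        }
    }
    return { messages, remainder };
}

const { messages, remainder } = parseSseChunk('data: one\n\ndata: two\n\ndata: tr');
// messages → ['one', 'two'], remainder → 'data: tr'
```

With this helper, the chunked-fetch reader loop shown earlier can feed each decoded chunk into the parser and dispatch complete messages as they arrive.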
HTTP range request
Range requests don't seem to be something we handle manually; they are performed automatically by the browser and the server, for example when a large file download is interrupted and resumed. They aren't really about timeout issues, so strictly speaking they don't belong in this discussion.
But what I'm curious about is the flow of a range request. How does the browser decide whether to send a range request or a normal request when downloading a zip? And how does the browser know the valid range size at the very beginning? There seems to be a probing phase for this to work, so how do the browser and server interact? And if there is probing, how does the server know a request is a probe rather than a download?
There is indeed a probing phase: it uses the HEAD method instead of a regular GET or POST, so only the file size information is obtained, without downloading the content.
- HEAD request sent by the browser:

```
HEAD / HTTP/1.1
Host: example.com
```

- Server response:

```
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 100
Content-Type: text/plain
```
But the probing stage is not always present. When we click a link, the browser doesn't know in advance that it is a large file, so it usually just sends a GET request to download the file directly, and records from the response headers whether range requests are supported (`Accept-Ranges`) and the total file size (`Content-Length`). That lets it decide whether it can switch to range requests when a paused download is resumed.
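The resume decision above can be sketched as: if the earlier response advertised `Accept-Ranges: bytes`, re-request starting at the byte offset already saved. The header names are real; the helper function itself is hypothetical:

```javascript
// Decide which headers to send when resuming a paused download.
function buildResumeHeaders(acceptRanges, bytesAlreadyDownloaded) {
    if (acceptRanges !== 'bytes' || bytesAlreadyDownloaded === 0) {
        return {}; // server can't resume (or nothing saved yet): plain GET
    }
    // Open-ended range: "give me everything from this offset on".
    return { Range: `bytes=${bytesAlreadyDownloaded}-` };
}

const headers = buildResumeHeaders('bytes', 1048576);
// headers.Range → 'bytes=1048576-'
```

If the server honors the range, it answers 206 Partial Content with a matching `Content-Range`; if not, it answers 200 with the whole file and the client must start over.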
- For static files, the web server usually has a built-in implementation for responding to range requests.
- For file downloads served by a controller action, we need to implement the range-download logic ourselves: read the Range request header, calculate the offset, set the response headers and status code, and return the corresponding slice of the data.
So when a controller action needs to support resumable downloads, it has to branch on the `Range` header: one branch for range downloads, one for full downloads.
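For reference, the exchange the two branches must handle looks like this (path and sizes are illustrative):

```
GET /files/demo.txt HTTP/1.1
Range: bytes=0-99

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-99/1000
Content-Length: 100
```

A request without the `Range` header falls through to the ordinary 200 full-download path.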
```csharp
[HttpGet]
public IActionResult GetFile(string filePath)
{
    var fileInfo = new FileInfo(filePath);
    var fileBytes = System.IO.File.ReadAllBytes(filePath);

    // Range request branch
    if (Request.Headers.ContainsKey("Range"))
    {
        var rangeHeader = Request.Headers["Range"].ToString();
        var range = rangeHeader.Replace("bytes=", "").Split('-');
        long start = long.Parse(range[0]);
        // "bytes=100-" (no end) means: everything from the offset on
        long end = range.Length > 1 && range[1] != "" ? long.Parse(range[1]) : fileInfo.Length - 1;

        if (start >= fileInfo.Length || end >= fileInfo.Length || start > end)
        {
            return StatusCode(416); // Requested Range Not Satisfiable
        }

        var filePart = fileBytes.Skip((int)start).Take((int)(end - start + 1)).ToArray();
        Response.Headers.Append("Content-Range", $"bytes {start}-{end}/{fileInfo.Length}");
        Response.StatusCode = 206; // Partial Content
        return File(filePart, "text/plain");
    }

    // Full download branch
    return File(fileBytes, "text/plain");
}
```
To make it even better and provide a probing endpoint for download tools, one would also implement the HEAD method, though this is probably rarely needed.
```csharp
[HttpHead]
public IActionResult HeadFile(string filePath)
{
    var fileInfo = new FileInfo(filePath);
    Response.Headers["Content-Length"] = fileInfo.Length.ToString();
    Response.Headers["Accept-Ranges"] = "bytes";
    return Ok(); // 200 OK: a HEAD response carries headers only, no body
}
```