Some time ago I asked myself the following, and I have searched but not found an answer.
The question is:
What is the limit on the amount of data that can be sent, both in the request and in the response, via AJAX using the POST method?
Ex:
HTML          Ajax          Controller
(Data1) ---> (Data1) ---> (Data1)   Request
(Data2) <--- (Data2) <--- (Data2)   Response
What is the maximum amount of data that AJAX supports for sending Data1 and Data2 via POST?
I appreciate any comments. Greetings!
A POST request, from both the client's and the server's point of view, is exactly the same as any other type of request. There is no distinction between GET, POST, PUT, or any other method (except perhaps a file upload, which is distinguished by its Content-Type header).
Leaving the client aside, every server has a configured limit on the maximum size of any type of request. In short: it depends on the server's configuration.
For example, according to the Apache documentation, that server lets you configure a limit between 0 and 2 GB.
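As a sketch: the Apache directive in question is LimitRequestBody, whose value is given in bytes, from 0 (unlimited, the default) up to 2147483647 (2 GB). The path and limit below are made-up illustrative values:

```apacheconf
# Hypothetical example: cap request bodies at 10 MB (value is in bytes)
# for everything served from this directory.
<Directory "/var/www/uploads">
    LimitRequestBody 10485760
</Directory>
```

A request whose body exceeds the limit is rejected with an HTTP 413 error instead of being processed.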
For the client, there is simply no limit. It keeps sending data until it receives some kind of error from the server. In the case of AJAX, whatever you send must already be in memory, so that poses no problem. And for uploads, the file size is known before the upload starts.
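To make this concrete, here is a minimal, self-contained sketch (names and the 1 KB limit are invented for illustration): a tiny Python HTTP server that enforces a maximum body size, and a client that sends data until the server answers with an error (413, Payload Too Large), which is exactly the behavior described above.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

MAX_BODY = 1024  # hypothetical server-side limit: 1 KB

class LimitedHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        if length > MAX_BODY:
            # Refuse oversized bodies, like a real server with a size limit
            self.send_response(413)  # Payload Too Large
            self.end_headers()
            return
        self.rfile.read(length)      # consume the body
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):    # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), LimitedHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("POST", "/", body=b"x" * 10)    # small body: accepted
small = conn.getresponse(); small.read()
conn.request("POST", "/", body=b"x" * 4096)  # over the limit: rejected
big = conn.getresponse(); big.read()
print(small.status, big.status)  # 200 413
```

The client happily builds and sends the 4 KB body; only the server's configuration decides that it is too big.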
EDIT
I didn't mean to go on so long in this answer, but anyway :-)
Technically speaking, a request (of whatever type, it doesn't matter) amounts to nothing more than establishing a socket (if one doesn't already exist) between the client and the server.
Through that socket, the amount of information that can flow is unlimited, until one of the parties (client or server) closes it.
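A minimal sketch of that idea: at the socket level, an HTTP request is nothing but bytes flowing in both directions until someone closes the connection. The request line and response below are hand-written illustrative bytes, not produced by any HTTP library:

```python
import socket

# A socket is just a bidirectional byte stream; both sides can keep
# writing until one of them closes it.
server, client = socket.socketpair()

client.sendall(b"POST /path HTTP/1.1\r\nHost: example\r\n\r\n")
request_bytes = server.recv(4096)            # server reads the raw request

server.sendall(b"HTTP/1.1 200 OK\r\n\r\n")   # server answers in raw bytes too
response_bytes = client.recv(4096)

client.close()
server.close()
print(request_bytes.split(b"\r\n")[0])  # b'POST /path HTTP/1.1'
```

Nothing in the socket itself limits how many bytes may pass; the limits come from the protocol rules and the software at each end.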
Now, HTTP establishes a clear protocol governing which bytes must flow, and in what order, so that both parties understand each other.
A typical POST request follows a well-defined format (not exact, but to give an idea): a request line, then headers, a blank line, and finally the body.
OK. As I said, there are no theoretical limits, but there are practical ones. Since the information does not arrive instantaneously, the server must store it somewhere until it is complete, in order to decide what to do with it.
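As an illustration of that layout (the host, path, and field names here are made up, and real requests carry more headers):

```
POST /some/path HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 27

field1=value1&field2=value2
```

Note that Content-Length announces the body size in advance, which is precisely what lets the receiving end decide whether to accept the request before reading the whole body.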
These decisions do not always require all the data: for example, an intermediate proxy only needs the URL to decide what to do. To relay the rest of the data it can use a small internal buffer; it doesn't need to store all of it, since it won't use it and will simply retransmit it to its final destination.
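The relaying just described can be sketched in a few lines (relay, bufsize, and the 100 KB payload are invented names and values): the proxy copies the stream in fixed-size chunks, so its memory use stays at the buffer size no matter how large the full payload is.

```python
import io

def relay(src, dst, bufsize=8192):
    """Copy a stream in fixed-size chunks, like a proxy forwarding a
    body it never needs to inspect: at most `bufsize` bytes are held
    in memory at any moment."""
    while True:
        chunk = src.read(bufsize)
        if not chunk:          # empty read means the stream is finished
            break
        dst.write(chunk)

payload = b"x" * 100_000           # a 100 KB body
upstream = io.BytesIO(payload)     # data arriving from the client
downstream = io.BytesIO()          # data forwarded to the final server
relay(upstream, downstream)
print(len(downstream.getvalue()))  # 100000
```

The whole payload passes through, but only 8 KB of it ever sits in the proxy's buffer at once.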
This is why some pieces of data have different limits than others. If the requested URL is very long, an intermediate proxy might be unable to process the request, since it does need that particular piece of data to make its decision. Think of small devices, with reduced processing power and little available memory.
Now, let's think about the server. Depending on the type of data (taken from the Content-Type included in the headers), it may need to receive all of the data before it can process any of it.
Unfortunately for our beloved server, the data being sent can be quite large. As we said before, it has to wait until it has all of it before processing. How is that done? Well, by using a block of memory in which we store all the data as we receive it.
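As a small sketch of why the whole body is needed first (the chunks and field names are invented): a form-encoded body can arrive split at an arbitrary point, so the server must accumulate it in memory and only then parse it.

```python
from urllib.parse import parse_qs

# Chunks as they might arrive from the socket; the split point is
# arbitrary, which is exactly why parsing must wait for the full body.
chunks = [b"name=Ana&a", b"ge=30"]

buffer = bytearray()
for chunk in chunks:
    buffer.extend(chunk)  # accumulate in memory as the data arrives

data = parse_qs(buffer.decode("utf-8"))
print(data)  # {'name': ['Ana'], 'age': ['30']}
```

Parsing either chunk on its own would produce garbage (`a` and `ge=30` are not valid fields); only the complete buffer is meaningful.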
Couldn't we do it some other way? For example, using a buffer on disk...
Of course we could. But... let's think about performance. If for every request an undetermined number of bytes has to be written to disk, and then several extra reads made to process it... and that for every request from every client... the machine's performance would barely cope with 10 or 20 simultaneous connections. On a quiet day.