I am trying to reduce the number of DB queries in a process I run periodically; each run has to perform a large number (on the order of 4000-8000) of INSERTs or UPDATEs.
I have already managed to reduce the INSERT part by grouping the rows into queries of 1000 insertions each, like this:
INSERT INTO hist_proceso(pc, fecha_comp, resultado, historico) VALUES (?, ?, ?, ?), (?, ?, ?, ?), (?, ?, ?, ?), (?, ?, ?, ?), (?, ?, ?, ?), (?, ?, ?, ?) ........ ;
This way, the 4000-8000 INSERTs are reduced to only 4-8 queries.
The problem comes when I try to do the same with the UPDATEs, since I cannot issue a query like the following:
UPDATE hist_proceso SET resultado=0, historico='aa';
because each row to update has different values.
The resultado column can only take the values {0, 1}, and in the historico column the values are concatenated like this: HH:mm/0;HH:mm/1;HH:mm/0;HH:mm/0;
Any suggestions?
Unfortunately, you can only optimize the INSERTs that way; with the UPDATEs, the most you can do is group them by assignment. Namely:
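For example, a sketch of two grouped UPDATEs, using the sample historico values described just below:
UPDATE hist_proceso SET resultado = 0 WHERE historico IN ('aa', 'bb', 'cc', 'dd'); -- one query per distinct value being assigned
UPDATE hist_proceso SET resultado = 1 WHERE historico IN ('ee', 'ff', 'gg', 'hh');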
This way you assign 0 to the resultado field of all the records whose historico field is 'aa', 'bb', 'cc' or 'dd', and the value 1 to the records whose historico field is 'ee', 'ff', 'gg' or 'hh'. With this kind of grouping I have reduced eight queries to just two.
You could also use an UPSERT, that is, INSERT ... ON CONFLICT ... DO UPDATE ...:
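A minimal sketch, assuming a PostgreSQL-style ON CONFLICT clause and that (pc, fecha_comp) forms the primary key (the actual key columns are a guess):
INSERT INTO hist_proceso (pc, fecha_comp, resultado, historico)
VALUES (?, ?, ?, ?), (?, ?, ?, ?)
ON CONFLICT (pc, fecha_comp) DO UPDATE
    SET resultado = EXCLUDED.resultado,  -- take the new value for the conflicting row
        historico = EXCLUDED.historico;  -- the key columns (pc, fecha_comp) are left out of the SET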
But you have to make sure that the primary key is made up of the right fields, and you have to leave those key fields out of the DO UPDATE assignments. During the insertion, a special table called EXCLUDED becomes available, temporarily holding the values of the rows that could not be inserted because of the conflict, so they can be used during the update. In this case we update with the new values and ignore the old ones.