I am writing a bash script to retrieve query text (querytxt) from an Amazon Redshift database. The catch is that, to reassemble each querytxt correctly, its fragments have to be ordered by sequence first.
In total there are 5250 queries; I already have their QueryIDs, so I can filter them correctly. This is my script currently:
RUTA=/monitoring/recreacion_ra3
contador=1
bloque=1
while read query
do
    # Reassemble the full query text, ordered by sequence, and append it
    # to the CSV file for the current block.
    LC_ALL=en_US.UTF-8 LD_LIBRARY_PATH=/usr/lib/ psql -h ${RSHOST} -p 5439 -U ${RSUID} ${RSDB} -t -A -c "select listagg(text) within group (order by sequence) as fullquery from stl_querytext where query = $query;" >> $RUTA/lista_query_${bloque}.csv
    # Roll over to a new output file every 250 queries.
    if (( contador % 250 == 0 ))
    then
        bloque=$((bloque+1))
    fi
    contador=$((contador+1))
done < $RUTA/listado_idquery.txt
As can be seen, the query that retrieves the query text and orders its fragments is:
select listagg(text) within group (order by sequence) as fullquery from stl_querytext where query = $query;
The execution went well until it reached query number 248, which threw the following error:
ERROR: Result size exceeds LISTAGG limit
DETAIL:
-----------------------------------------------
error: Result size exceeds LISTAGG limit
code: 8001
context: LISTAGG limit: 65535
query: 59919376
location: _rds_bin_padb.1.0.22169_data_exec_72_3578893640_3e87a9c5c65d0636877214dd854e3a6255978936_0.cpp:150
process: query0_799_59919376 [pid=105429]
-----------------------------------------------
Apparently, listagg has a maximum result size (65535 bytes) and exceeding it raises an error. That in itself doesn't bother me, because losing a few of the 5250 queries is acceptable. The issue is that when this error occurs, the script stops.
How can I handle errors at runtime so that, if a query can't be retrieved, the script simply moves on to the next one?
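What I have in mind is something like the generic continue-on-error pattern below. Here `fetch_query` is a hypothetical stand-in for the real psql call, rigged to fail for one id so the skip logic can be seen; I assume that branching on the command's exit status is what keeps the loop going:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the psql call: fails (non-zero exit) for id 3.
fetch_query() {
    [ "$1" -ne 3 ]
}

ok=0
failed=0
for query in 1 2 3 4 5; do
    # Branch on the command's exit status: a failure is logged and skipped,
    # and the loop continues with the next query.
    if fetch_query "$query"; then
        ok=$((ok+1))
    else
        failed=$((failed+1))
        echo "query $query failed; skipping" >&2
    fi
done
echo "$ok ok, $failed failed"
```

In the real script, the psql invocation would take the place of `fetch_query`; I am assuming its exit status is non-zero when the LISTAGG error occurs.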