# all-things-deployment
a
Hi guys, we’re trying to run restore indices via the datahub-upgrade container, but for some reason it gets a read timeout from the database when it is almost done processing the last few batches. We’ve set batchSize to the default of 1000, and there are a total of 14000 rows in our aspects table. It seems to consistently fail while processing rows 12000-13000. Is there a config for increasing the read timeout of the database connection? Or has anyone else encountered this issue before? The container we are running in has plenty of memory (5 GB), so I don’t think it’s timing out due to GC issues.
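For reference, we’re kicking the job off roughly like this (a minimal sketch; the image tag and the connection env var values below are placeholders, not our real ones):
```
# Run the RestoreIndices upgrade with the default batch size.
# Image tag and DB credentials here are illustrative placeholders.
docker run --rm \
  --env EBEAN_DATASOURCE_URL="jdbc:mysql://mysql:3306/datahub" \
  --env EBEAN_DATASOURCE_USERNAME="datahub" \
  --env EBEAN_DATASOURCE_PASSWORD="datahub" \
  acryldata/datahub-upgrade:head \
  -u RestoreIndices -a batchSize=1000
```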
Turns out that using a much smaller batch size gets around the issue. That leads me to believe some of our aspect rows are extremely large, so even with a batch of 500 the connection to the DB fails. I’m able to run the same queries from a different DB client, so in theory we should be able to configure some setting to adjust the database connection settings used by the upgrade container.
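One way to sanity-check the large-row theory (this assumes the v2 aspects table, metadata_aspect_v2, on a MySQL backend):
```
# Ten largest aspect rows by payload size; table/column names assume
# DataHub's metadata_aspect_v2 schema. Prompts for the password;
# "datahub" here is the database name.
mysql -h mysql -u datahub -p datahub -e "
  SELECT urn, aspect, version, LENGTH(metadata) AS payload_bytes
  FROM metadata_aspect_v2
  ORDER BY payload_bytes DESC
  LIMIT 10;"
```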
d
Thanks for reporting this, Umair! We’ll get back to you shortly (cc @dazzling-yak-93039)
a
I see a setting called `EBEAN_MAX_INACTIVE_TIME_IN_SECS`; I’m guessing this corresponds to some sort of read timeout. I’ll give it a try and see if it makes a difference.
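If anyone wants to try the same thing, passing it through to the upgrade container looks roughly like this (300 is just an arbitrary value to test with):
```
# Bump Ebean's max inactive time on the datahub-upgrade container.
# The 300s value is an arbitrary test guess, not a recommendation.
docker run --rm \
  --env EBEAN_MAX_INACTIVE_TIME_IN_SECS=300 \
  acryldata/datahub-upgrade:head \
  -u RestoreIndices -a batchSize=500
```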
a
Yes, this has happened before! In some cases we have even reduced the batch size to 10 when we were seeing concurrency issues or timeouts. That environment variable sounds like it's on the right track as well.
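For reference, the dialed-down run we've used looks something like this (batchDelayMs is taken from the RestoreIndices argument list; double-check the arg names for your version):
```
# Shrink the batch and pause between batches to ease pressure on the DB.
# batchDelayMs is assumed from the RestoreIndices docs; verify for your version.
docker run --rm acryldata/datahub-upgrade:head \
  -u RestoreIndices -a batchSize=10 -a batchDelayMs=1000
```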
a
Turns out `EBEAN_MAX_INACTIVE_TIME_IN_SECS` doesn’t solve this issue. Could one of the devs maybe shed some light on what sort of setting could be exposed to get around these read timeouts we’re seeing? It isn’t feasible to set batchSize=1.
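One more thing we’re planning to try: pushing a socket (read) timeout through the JDBC URL itself. This sketch assumes a MySQL Connector/J driver, where socketTimeout is in milliseconds; the Postgres driver has a socketTimeout parameter in seconds instead.
```
# Raise the JDBC socket (read) timeout via the datasource URL.
# socketTimeout is milliseconds for MySQL Connector/J; 0 means no timeout.
docker run --rm \
  --env EBEAN_DATASOURCE_URL="jdbc:mysql://mysql:3306/datahub?socketTimeout=600000" \
  acryldata/datahub-upgrade:head \
  -u RestoreIndices -a batchSize=500
```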