# general
t
If a request is timing out you probably want to assume it's a dud after a few seconds, as DynamoDB is no slouch. You can set that timeout with the aws-sdk. 150 writes is not much unless the objects are massive, though
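E.g. on the v2 SDK, something like this (just a sketch, tune the values):

import * as AWS from "aws-sdk";

// Fail fast rather than waiting out the default timeouts.
const ddb = new AWS.DynamoDB.DocumentClient({
  maxRetries: 2,
  httpOptions: {
    connectTimeout: 1000, // ms to establish the socket
    timeout: 3000, // ms for the request/response itself
  },
});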
r
We run them all in parallel; the individual writes don't time out themselves, it's the overall Lambda timeout that's getting exceeded
CloudWatch metrics suggest they're taking 5ms
t
What percentage of the items are successfully written to the table?
r
About 80%
t
And do you have a timeout value set when using the aws-sdk?
Wait, which language are you using?
r
Not explicitly, no
Node.js
We have connection reuse turned on and maxSockets effectively set to 50, as per the docs
v2 SDK
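Roughly this, per the docs on reusing connections (a sketch of the setup, not the literal file):

import * as https from "https";
import * as AWS from "aws-sdk";

// Shared keep-alive agent; maxSockets caps concurrent requests at 50.
const agent = new https.Agent({ keepAlive: true, maxSockets: 50 });
AWS.config.update({ httpOptions: { agent } });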
t
I will try to recreate it because I'm interested. We write quite a lot of data we pull from 3rd-party APIs to Dynamo, but we put it into S3 first as it's time series but low frequency, and that gives us a cheaper historic record
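The S3-first write is nothing fancy, something like this (a sketch; the bucket name and key scheme here are made up):

import * as AWS from "aws-sdk";

const s3 = new AWS.S3();

// Hypothetical key scheme: partition by source and day so the
// historic record stays cheap to list and re-process later.
async function archiveReading(source: string, reading: object): Promise<void> {
  const ts = new Date().toISOString();
  await s3
    .putObject({
      Bucket: "my-timeseries-archive", // assumed bucket name
      Key: `${source}/${ts.slice(0, 10)}/${ts}.json`,
      Body: JSON.stringify(reading),
      ContentType: "application/json",
    })
    .promise();
}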
r
Thank you, that's really appreciated. We're using on-demand capacity too
Our Lambda runs every 5 mins with a 4.5 min timeout
Looks something like this:
async function someFunction(): Promise<number> {
  // Save each plan set; log failures rather than failing the whole batch
  const savePlanSetPromises = someArray.map((dataset) => {
    return SomeDataLayer.save(dataset).catch((err) => {
      log.error(`Some error: ${(err as Error).message}`);
    });
  });

  // Save each activity, counting the ones that succeed
  let saveCount = 0;
  const saveActivityPromises = activities.map((activity) => {
    return SomeOtherDataLayer.save(activity)
      .then(() => {
        saveCount++;
      })
      .catch((err) => {
        log.error(`Some Error ${(err as Error).message}`);
      });
  });

  await Promise.all(savePlanSetPromises);
  await Promise.all(saveActivityPromises);
  return saveCount;
}
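With maxSockets at 50, the 150-odd parallel puts will queue inside the agent anyway, so one thing worth trying is capping how many writes are in flight at once. A hypothetical helper (not from the thread) along these lines:

// Save items in chunks of `limit` so at most `limit` requests are
// in flight at a time (matching the 50-socket agent), and return
// how many succeeded.
async function saveWithLimit<T>(
  items: T[],
  save: (item: T) => Promise<unknown>,
  limit = 50,
): Promise<number> {
  let saved = 0;
  for (let i = 0; i < items.length; i += limit) {
    const chunk = items.slice(i, i + limit);
    await Promise.all(
      chunk.map((item) =>
        save(item)
          .then(() => {
            saved++;
          })
          .catch((err) => {
            console.error(`Save failed: ${(err as Error).message}`);
          }),
      ),
    );
  }
  return saved;
}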