# guide
c
Hey guys, how do you usually warm up SST lambdas - if there is a way to do it (sorry if it is a newbie question)
r
I typically don't. If you get a bunch of requests in parallel, you'll get cold starts for each lambda that's spun up to deal with those requests. So to keep all of them warm ahead of time you'd have to mimic multiple parallel requests or use provisioned concurrency. All of this costs money and adds complexity. IMHO it's better to work on keeping your cold starts to the shortest time you can, if that's important to you
c
Hmmm interesting!
a
Not an SST issue, just Lambda. Cold starts aren't bad if you: 1. Keep your deployed code small 2. Don't do too much work outside the handler (init code) 3. Avoid putting Lambda inside a VPC if you can.
Oh, and avoid Java. 🤷 If I recall, it has the highest cold start penalty.
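As a rough illustration of points 1 and 2 (not something from the thread), a Node/TypeScript handler that only pulls in the modular AWS SDK v3 client it needs and keeps module-level init down to a single reusable client might look like this; the table name and handler shape are placeholders:
```ts
// Import only the client you need (AWS SDK v3 is modular) to keep the deployed bundle small.
import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";

// Minimal init code: one client created per container, reused by every warm invocation.
const db = new DynamoDBClient({});

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  // TABLE_NAME and the key shape are placeholders for this sketch.
  const result = await db.send(
    new GetItemCommand({
      TableName: process.env.TABLE_NAME!,
      Key: { pk: { S: event.rawPath } },
    })
  );
  return { statusCode: 200, body: JSON.stringify(result.Item ?? {}) };
};
```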
g
Doing more outside the handler is better technically because it gives you the full 2 vCPUs instead of being limited based on your RAM setting
and it will make the subsequent warm requests faster
a
As long as it is useful work. You don't want to do work before you get the request if the request may not need it. Never heard of the cold start getting 2 vCPUs regardless of the memory size setting though. Reference?
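For context, the pattern being debated here, doing setup at module scope so it runs during the init phase and gets reused on warm invocations, looks roughly like this (the parameter name is made up):
```ts
import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

// Kicked off at module scope: this runs during the init phase, once per container,
// so warm invocations only await an already-resolved promise.
const configPromise = ssm.send(
  new GetParameterCommand({ Name: "/my-app/config", WithDecryption: true })
);

export const handler = async () => {
  const config = await configPromise;
  return { statusCode: 200, body: config.Parameter?.Value ?? "" };
};
```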
t
Agree with Ross. Lambda warming is a somewhat complicated subject and it's a bit counterintuitive. Focus on cold starts by keeping the package small. If you do want to keep stuff warm, check out provisioned concurrency. That's the only way to actually keep a bunch of capacity warm
c
thanks guys, that makes sense. will focus on reducing the packages then
t
lmk if you need help with analyzing your functions
m
One trick you can use in Python (I would assume it translates to other languages as well) is, if you have a big, expensive package to import (Pillow, Pandas, NumPy, etc.), to defer that import to the inside of the handler. It will speed up your cold start, but it will move the time to do the import into the handler. It's helpful if you have multiple different code paths in one handler and not all of them need the expensive import. Better to keep multiple code paths on separate lambdas though. A better bet is to try and move expensive code (in terms of size and speed) to a non-interactive, asynchronous format, like an SQS queue consumer, where the cold start time doesn't matter.
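A Node/TypeScript analogue of that lazy-import trick, for anyone not using Python, might look like the sketch below; dynamic import() gives the same effect, and "sharp" here is just a stand-in for whatever heavy dependency you have:
```ts
// Only the cheap code path is loaded at cold start; the heavy dependency is
// pulled in lazily on the path that actually needs it.
export const handler = async (event: { action: "thumbnail" | "ping"; imagePath?: string }) => {
  if (event.action === "ping") {
    return { statusCode: 200, body: "pong" };
  }

  // Deferred import: the module-load cost is paid here instead of during init.
  const { default: sharp } = await import("sharp");
  const thumbnail = await sharp(event.imagePath ?? "/tmp/input.png")
    .resize(128, 128)
    .toBuffer();
  return { statusCode: 200, body: thumbnail.toString("base64") };
};
```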
s
Warming Lambdas seems like it doesn't work. I have an event firing off every minute, and it still doesn't keep the functions warm
Provisioned concurrency can work, but it forces versioning, so you have to be mindful of that. And it can get costly quickly
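To make the versioning point concrete, provisioned concurrency attaches to a published version or an alias rather than $LATEST; a minimal CDK-style sketch (assuming aws-cdk-lib imports, with the function and count as placeholders) could look like:
```ts
import * as lambda from "aws-cdk-lib/aws-lambda";

// Some existing function, e.g. one created by your SST stack.
declare const fn: lambda.Function;

// Provisioned concurrency is configured on an alias that points at a published
// version, which is why enabling it forces versioning.
new lambda.Alias(fn.stack, "LiveAlias", {
  aliasName: "live",
  version: fn.currentVersion,
  provisionedConcurrentExecutions: 5, // capacity kept warm, billed even while idle
});
```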
c
really, @Sam Hulick? good to know
@thdxr that’d be great! though the side-project I’m working on using SST doesn’t actually need warm lambdas all the time (most of the lambdas are cron jobs) - I just asked to find out if there is a way, and to implement it if so, but mostly just to learn
s
@Carlos Daniel actually, forget what I said about function warming not working. my function warmer was not working right 😄 It used an env var to determine what functions to keep warm, and I realized when SST deploys, it wipes out the env vars
@thdxr is there a way to have a function’s env vars be left alone upon deploy?
c
great! i think you can set the env vars on your function props
even add default envs
don't know if that was your question
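Roughly, that means something like the sketch below in an SST stack; the construct IDs, handler path, and variable names are placeholders, and the default-props API may differ by SST version:
```ts
import * as sst from "@serverless-stack/resources";

export class ApiStack extends sst.Stack {
  constructor(scope: sst.App, id: string, props?: sst.StackProps) {
    super(scope, id, props);

    // Env vars set directly on the function props
    new sst.Function(this, "MyFn", {
      handler: "src/lambda.handler",
      environment: {
        TABLE_NAME: "my-table", // placeholder value
      },
    });

    // For app-wide defaults, something like app.setDefaultFunctionProps({ environment: { ... } })
    // in the app entry point is the usual route (check your SST version's docs).
  }
}
```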
s
well, I don’t want to hardcode the env var anywhere, that’s the thing
but I don’t want SST to overwrite the env var when it deploys
I don’t know if that’s even possible. Lambda itself might just wipe out the env vars upon every new function deploy
c
i "hard code" my env vars but get the values from the actual .env file, like this
my CI has the values I want deployed so it actually works. would be good tho to know if there is another way
r
With my projects there are typically various groups of lambdas within a stack that have requirements for different env vars. So, rather than clutter up the stack definition with lots of specific assignments I have a helper function that looks like this:
```ts
type EnvironmentVariableHolder = {
  [key: string]: string;
};

// Copies the listed env vars from process.env, skipping any that aren't set
export function buildEnvVarObject(vars: string[]): EnvironmentVariableHolder {
  const output: EnvironmentVariableHolder = {};
  vars.forEach((envVar) => {
    const value = process.env[envVar];
    if (!value) return;
    output[envVar] = value;
  });
  return output;
}
```
Then, for each group of lambdas I have a separate file that contains the keys of the required env vars. e.g.
```ts
const vars = [
  'CUSTOMER',
  'CUSTOMER_PUBLIC',
  'DATASET_IDS',
];

export { vars };
```
Then in the stack definition I can do
```ts
import { buildEnvVarObject } from './helpers/buildEnvObject';
import { vars as envVarsCommon } from './helpers/envVarsCommon';
```
Then I can build the environment with
```ts
environment: {
  ...buildEnvVarObject(envVarsCommon),
}
```
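where that spread typically sits inside the function definition, e.g. (function ID and handler path made up):
```ts
new sst.Function(this, "CustomerFn", {
  handler: "src/customer.handler",
  environment: {
    ...buildEnvVarObject(envVarsCommon),
  },
});
```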
c
wow, great idea! will test it here