# help
Hi all, I want to enable warming for some of my lambdas but I need to give the lambda permission to invoke itself, is there a way we can do that with SST?
Use EventBridge: create an event that runs on a schedule. It will be much simpler and more controllable.
That is what I am doing; this one just handles concurrency. Instead of having 1 lambda warm, with this I can have 5 if needed.
oh, yeah this won’t work for concurrent executions.
If your timeout is more than a few seconds, you could create a simple lambda function that takes a concurrency value as a parameter. Call it from an EventBridge schedule and have it make the number of calls specified in the event, in parallel, to the lambda you need to keep warm.
Haha! Nested magic lol! 😂
Yep that is what this package basically does, it just uses the same lambda instead of a separate one
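For reference, the warmer idea described above (a scheduled function that takes a concurrency value and fires that many parallel async invokes at the target) could be sketched like this. The event field names and the `{'warmer': True}` payload are illustrative assumptions, not a fixed contract:

```python
import json
from concurrent.futures import ThreadPoolExecutor


def warm_in_parallel(invoke, function_name, concurrency):
    """Fire `concurrency` async invokes of `function_name` in parallel.

    `invoke(function_name, payload_bytes)` is injected so it can be the
    real boto3 call inside Lambda, or a stub when testing locally.
    """
    payload = json.dumps({'warmer': True}).encode('utf8')  # warm-up marker
    with ThreadPoolExecutor(max_workers=max(1, concurrency)) as pool:
        # Parallel fire-and-forget calls so several containers stay warm at once
        list(pool.map(lambda _: invoke(function_name, payload),
                      range(concurrency)))
    return concurrency


def handler(event, context=None):
    """EventBridge entry point. Expected event shape (illustrative):
       {"function_name": "my-api-handler", "concurrency": 5}
    """
    import boto3  # only needed when actually running in Lambda
    client = boto3.client('lambda')

    def invoke(name, payload):
        # 'Event' = asynchronous invoke, so the warmer returns quickly
        return client.invoke(FunctionName=name,
                             InvocationType='Event',
                             Payload=payload)

    return warm_in_parallel(invoke,
                            event['function_name'],
                            int(event.get('concurrency', 1)))
```

Separating `warm_in_parallel` from the boto3 wiring keeps the fan-out logic testable without AWS credentials.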
@Kujtim Hoxha In this example, I wanted an API-invoked Lambda to call itself to start a longer-running async task while replying to the API request more quickly. First, grant the Lambda permission to invoke itself (or other functions):
```typescript
const api = new sst.Api(this, "api", {
  defaultFunctionProps: {
    runtime: lambda.Runtime.PYTHON_3_8,
    srcPath: "src",
    permissions: [
      new iam.PolicyStatement({
        actions: ["lambda:Invoke*"],
        effect: iam.Effect.ALLOW,
        // The resource list was cut off in the original message; "*" matches
        // the "call any lambda function" permission discussed below — scope
        // it down to specific function ARNs where you can.
        resources: ["*"],
      }),
    ],
  },
});
```
Then in the lambda handler, check whether the event is a real HTTP request before invoking the function again asynchronously (the `Event` invocation type), to prevent an infinite loop:
```python
import json

import boto3

client = boto3.client('lambda')

def handler(event, context):
    # Only self-invoke for genuine HTTP GET requests — the async
    # self-invocation won't have this request context, so no infinite loop.
    if event.get('requestContext', {}).get('http', {}).get('method') == 'GET':
        payload_str = json.dumps( <Build your payload from the event input > )
        payload_bytes_arr = bytes(payload_str, encoding='utf8')
        response = client.invoke(
            FunctionName=context.function_name,  # call ourselves
            InvocationType='Event',  # async: returns without waiting
            Payload=payload_bytes_arr,
        )
```
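An alternative to the HTTP-method check is to have the warm-up (or self-invoke) payload carry an explicit marker and short-circuit on it in the target handler. A minimal sketch, assuming a `{'warmer': True}` convention of your own choosing:

```python
import json


def is_warmup(event):
    # Detect a warm-up ping (a direct-invoke payload with an agreed marker;
    # the 'warmer' key is an illustrative convention, not an AWS field)
    return isinstance(event, dict) and event.get('warmer') is True


def handler(event, context=None):
    if is_warmup(event):
        # Return immediately — the container is now warm, no real work done
        return {'warmed': True}
    # ... real work for genuine requests goes here ...
    return {'statusCode': 200, 'body': json.dumps({'ok': True})}
```

This keeps warm-up invocations cheap, since the handler exits before touching databases or other downstream services.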
Ah Ok I did not think of giving the function permission to call any lambda function😂
Thanks a lot
Hey guys, are all of you warming your Lambdas? I wanted to get a sense of how many people need this and decide if we should build it into SST.
I know a lot of people using serverless framework who do this but I’ve seen them cry when the bills come because they configured it wrong. It’s a wallet burner.
We don't. For a single user, an extra second or so of response time when they load the app is never really noticed, so it's just not worth it. Aurora Serverless DB, on the other hand, is a different story: it takes an age to wake up.
Yeah, Ross is probably right. I am mostly testing it out; I assume that once there is a consistent user base there will be enough warm lambdas around.