# help
t
From googling I see examples loading secrets into environment variables - this doesn't seem secure to me and seems like it negates the point of using secret manager.
s
I think this depends. In the past I have used SSM Parameter Store with secure strings. You will run into the rate limit if you fetch secrets on every Lambda invocation; Middy has a nice caching implementation for this use case: https://middy.js.org/packages/ssm/
t
Yeah this is what I was worried about but realized I could store all my variables in one secret so it's only 1 fetch. And can do it outside the function body so it would just be once per cold start
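Roughly the shape I have in mind (sketch only; `fetchSecretJson` is a stand-in for the real Secrets Manager call, not actual code):

```typescript
// Sketch: fetch the whole secret bundle once, at module scope,
// so it runs once per cold start rather than once per invocation.
// `fetchSecretJson` is a placeholder for the real Secrets Manager fetch.
let fetches = 0

async function fetchSecretJson(): Promise<Record<string, string>> {
  fetches++ // counts how many times we hit the (fake) secret store
  return { SENDGRID_API_KEY: "sk_fake", MAIL_ADAPTER: "sendgrid" }
}

// Module scope: evaluated once when the Lambda container starts
const configPromise = fetchSecretJson()

export async function handler(): Promise<string> {
  // Already resolved on every invocation after the first
  const config = await configPromise
  return config.MAIL_ADAPTER
}
```

So every warm invocation reuses the same in-memory config and there is exactly one fetch per cold start.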
It's weird there are so many online examples of people copying from SSM -> env variables; that seems definitely wrong.
s
It depends how secret the secret is IMO. If it is something like a username, then maybe it can be stored in plain text. If it were a database password it might look like a bit of a smell. I think it is common to copy from SSM into the Lambda env at deploy time; Serverless Framework even supports this through their templating language: https://www.serverless.com/framework/docs/providers/aws/guide/variables/
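For reference, that deploy-time copy looks roughly like this in serverless.yml (the parameter path here is made up):

```yaml
# Sketch: /myapp/db-password is a hypothetical SSM parameter path.
# The value is resolved once at deploy time and baked into the env vars.
provider:
  environment:
    DB_PASSWORD: ${ssm:/myapp/db-password}
```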
j
d
@thdxr https://serverless-stack.slack.com/archives/C01JG3B20RY/p1623769542372000 I operate under the assumption that our environment itself is secure, so retrieving something from SSM and sticking it into your env vars is associatively secure.
what did you end up going with?
t
The problem is that if someone has access to the AWS console they can look at it freely, and accessing it is not logged
I'm going with #3 from the post jay shared
This is ugly because of deasync, but I really didn't want to deal with async code everywhere and wanted everything to pretend that loading data from SSM was synchronous
```typescript
import { SecretsManager } from "aws-sdk"
import deasync from "deasync"

const sm = new SecretsManager()

const DefaultConfig = {
  BUCKET_RENDER: "",
  URL_REST: "https://localhost:1313",
  URL_WEB: "https://localhost:8080",
  MAIL_ADAPTER: "console",
  SENDGRID_API_KEY: "",
}

type ConfigType = typeof DefaultConfig

export const Config = {} as ConfigType

function load(cb: any) {
  sm.getSecretValue(
    {
      SecretId: "dev/node",
    },
    (err, response) => {
      if (err) throw err
      const config = JSON.parse(response.SecretString as string)
      // Precedence: process.env (local override) -> SSM secret -> default
      for (const [key, value] of Object.entries(DefaultConfig)) {
        Config[key as keyof ConfigType] =
          process.env[key] || config[key] || value
      }
      console.log(Config)
      cb()
    }
  )
}

// Block synchronously until the secret has loaded (once per cold start)
deasync(load)()
```
It uses what's in process.env (for local override), otherwise what's in SSM, otherwise the default.
s
you may be able to use:
```typescript
const resp = await ssm.getSecretValue({
  SecretId: 'dev/node'
}).promise()
```
I think a lot of the aws-sdk methods support returning the promise
t
The issue here is turning something async into sync. In that situation a Lambda could fire before the config has loaded, so I'd have to await it everywhere I reference the config
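i.e. the async version I'm trying to avoid would look something like this: a memoized promise that every call site has to await (sketch only; `loadFromStore` stands in for the real SSM/Secrets Manager fetch):

```typescript
// Sketch of the pattern I'm avoiding: a lazily-memoized config promise.
// `loadFromStore` is a placeholder for the real async SSM fetch.
type Config = { URL_REST: string }

async function loadFromStore(): Promise<Config> {
  return { URL_REST: "https://localhost:1313" }
}

let cached: Promise<Config> | undefined

function getConfig(): Promise<Config> {
  // First caller kicks off the fetch; later callers share the same promise.
  if (!cached) cached = loadFromStore()
  return cached
}

// Every call site pays the async tax:
export async function handler(): Promise<string> {
  const { URL_REST } = await getConfig()
  return URL_REST
}
```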
Oh does AWS lambda support top level await?
t
Unfortunately there's an incompatibility with SST (cc @Jay @Frank). Using top-level await requires ESM, which means setting `"type": "module"` in package.json. This makes it so `require` statements cannot be used. As part of the deploy process a script gets generated at `.build/run.js` which does use `require` statements and is unable to run. Possibly can fix it by renaming it to `.build/run.cjs`
f
I see. Just opened an issue to track this https://github.com/serverless-stack/serverless-stack/issues/451
t
awesome, I'm doing some investigation to see if I can suggest something more specific