# help
I’m setting up a CI/CD pipeline on GitLab for running SST deployments, and it seems to be amazingly slow. Part of the slowness is related to the Docker builds that happen in the deploy stage. Any suggestions on how to improve performance here? I’m not sure GitLab is able to cache Docker layers.
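For context, GitLab runners usually get a fresh Docker daemon per job, so the layer cache has to be persisted explicitly, typically by pushing a cache image to the project’s container registry and seeding the next build with `--cache-from`. A sketch of a `.gitlab-ci.yml` fragment (assumes Docker-in-Docker; the `cache` tag name is an arbitrary choice here):

```yaml
deploy:
  stage: deploy
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_BUILDKIT: "1"
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Pull the previous cache image if it exists; ignore failure on first run.
    - docker pull "$CI_REGISTRY_IMAGE:cache" || true
    # Reuse its layers, and embed cache metadata so the pushed image can
    # itself serve as a cache source next time.
    - >
      docker build
      --cache-from "$CI_REGISTRY_IMAGE:cache"
      --build-arg BUILDKIT_INLINE_CACHE=1
      -t "$CI_REGISTRY_IMAGE:cache" .
    - docker push "$CI_REGISTRY_IMAGE:cache"
```

This helps for images you build yourself; it won’t directly speed up containers that a tool spins up internally on your behalf.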
This is for a Python package, by the way.
The slowness I see seems to be related to building these three containers on the fly. Ping @thdxr: https://github.com/serverless-stack/serverless-stack/tree/7517b385a4899dab5d46fdd08554ceef371c7b70/packages/core/assets/python
@thdxr is there a way to do something similar to
sls package
to avoid running this in the deploy stage, and instead have things packaged in an earlier stage?
sst build
seems to require auth to AWS 😕
Would be great to have a workflow like:
• Build creates a package that does not contain stage-specific information.
• Deploy takes a parameter pointing at the package and starts by populating it with all the stage information that is needed.
Or is there maybe a way to produce the Python package up front and avoid the Python dependency installation in the deploy stage?
I’m also seeing that the bundled packages include boto3 and botocore, which are already available in the Lambda runtime; not sure why this happens?
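Since boto3/botocore ship with the Lambda runtime, one way to keep them out of the bundle is to filter the requirements file before installing dependencies into the package directory. A sketch, not something SST does for you today; the `filter_requirements` helper and the file paths are illustrative:

```shell
# Drop requirements lines that pin boto3/botocore (with or without
# version specifiers), since the Lambda runtime provides them.
filter_requirements() {
  grep -vE '^(boto3|botocore)([=<>!~ ]|$)' "$1"
}

# Example input; in CI this would be your real requirements.txt.
cat > /tmp/requirements.txt <<'EOF'
boto3==1.26.0
botocore==1.29.0
requests==2.28.2
EOF

filter_requirements /tmp/requirements.txt > /tmp/requirements.lambda.txt
cat /tmp/requirements.lambda.txt
# → requests==2.28.2

# The filtered file would then feed the usual Lambda vendoring step, e.g.:
# pip install -r /tmp/requirements.lambda.txt -t build/python
```

The trade-off: the bundled boto3 version is pinned and predictable, while the runtime-provided one changes whenever AWS updates the runtime, so some teams bundle it deliberately.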
Hey @Jan Nylund, what if we added an option to not build the Python functions inside Docker? That would require the CI environment to have the same architecture and Python version as the Lambda runtime.
Would that work for you?
That would indeed work and put it on par with the Serverless Framework solution.
@Frank but I could not figure out how to use the result of
sst build
to deploy? Is that currently not supported?
@Frank it would be great to do
sst build
(or
sst package
) and then in the deploy stage be able to do
sst deploy --package <path-to-package> --stage <potentially some-other-stage-than-in-package>
This would allow packaging once and deploying to many environments. I’m aware that the package contains stage information, but I assume it’s not that much.
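The pipeline being asked for could look something like this (entirely hypothetical: an `sst package` command and the `--package`/`--out` flags sketched here do not exist in SST today, this is just the shape of the proposal):

```yaml
build:
  stage: build
  script:
    # hypothetical stage-agnostic packaging step
    - npx sst package --out dist/
  artifacts:
    paths:
      - dist/

deploy:
  stage: deploy
  script:
    # hypothetical flags from the proposal above
    - npx sst deploy --package dist/ --stage "$STAGE"
```

The `artifacts` block is standard GitLab CI: it carries the packaged output from the build job to the deploy job.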
sst build
process actually needs to be aware of the stage. For example, you can have stage-specific logic in your code:
new sst.Api(stack, "api", {
  customDomain: app.stage === "prod"
    ? "api.domain.com"
    : undefined,
});
Depending on the stage, the generated CloudFormation from
sst build
is different, e.g. for the custom domain above.
This is also true with Serverless Framework: you can’t run
sls package
against one stage and deploy the package to another stage.
@Frank Yes, I’m very well aware. But I dislike it because most of the work done for packaging is the same, except for the CloudFormation part.
(I dislike it equally much for Serverless Framework) 🙂
One option would be to allow the packaging of Lambdas to be done separately from the CDK part, so that it would be possible to run a build for, say, dev + prod, and it would create a subfolder for each stage but refer to the same packaged code.
This is from running the build twice, once for dev and once for prod, and diffing the contents:
[x86_64] ± diff -rq .build-dev .build-prod 
Only in .build-dev/cdk.out: dev-demo-api-ApiStack.assets.json
Only in .build-dev/cdk.out: dev-demo-api-ApiStack.template.json
Only in .build-dev/cdk.out: dev-demo-api-StorageStack.assets.json
Only in .build-dev/cdk.out: dev-demo-api-StorageStack.template.json
Files .build-dev/cdk.out/manifest.json and .build-prod/cdk.out/manifest.json differ
Only in .build-prod/cdk.out: prod-demo-api-ApiStack.assets.json
Only in .build-prod/cdk.out: prod-demo-api-ApiStack.template.json
Only in .build-prod/cdk.out: prod-demo-api-StorageStack.assets.json
Only in .build-prod/cdk.out: prod-demo-api-StorageStack.template.json
Files .build-dev/cdk.out/tree.json and .build-prod/cdk.out/tree.json differ
Files .build-dev/sst-debug.log and .build-prod/sst-debug.log differ
Files .build-dev/sst-merged.json and .build-prod/sst-merged.json differ
@Frank but is there currently any way to use the build output in a later deploy stage?
Even in GitLab I could run multiple builds one after another and then make use of the Docker layer caches that are otherwise temporary.
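Concretely, that could be a single job deploying both stages back to back; within one job the runner keeps the same Docker daemon, so the second deploy can reuse the layers the first one built. A sketch (job layout illustrative; `sst deploy --stage` is the standard SST CLI form):

```yaml
deploy:
  stage: deploy
  script:
    - npx sst deploy --stage dev
    # Same runner and Docker daemon, so the Python bundling images
    # built for dev are still warm in the local layer cache here:
    - npx sst deploy --stage prod
```

The cache still evaporates between pipelines unless it is also pushed to a registry, but within one pipeline run this avoids rebuilding the bundling containers from scratch per stage.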