# sst
s
Hey @Frank, Do you have an update on this open issue (speeding up CloudFront update/invalidation)? We're struggling with fairly slow deployments on Seed (~20 mins) and about 50% of that time is spent on the 'Deploy' step. Of this, it seems about 3.5 minutes is spent on `AWS::CloudFront::Distribution`. Anything that would help speed up deployments would be hugely helpful for us.
f
Hey @Sam Wrigley, there might be something we can do with:
1. updating the distribution to point to the new S3 folder
Let me discuss it w/ the team and keep you posted.
As a side note, how do you feel about serving static sites from API Gateway + Lambda instead of CloudFront? It’s been requested before. You don’t get the benefit of a CDN, but the deploy time is much faster.
s
Thanks, @Frank!
In terms of serving static sites from API Gateway + Lambda instead, I'm not sure that would actually help us much, as the vast majority of our pages aren't static. Even if they were, I'm not sure we'd want to sacrifice performance for faster deploys, to be honest.
f
@Sam Wrigley Agreed. @Derek Kershner saw ur downvote as well. Sounds good guys, that’s aligned with our thoughts too. Just wanted to run it by u.
s
Thanks, @Frank! Going back to your first suggestion:
> updating the distribution to point to the new S3 folder
Is that something you're still looking into?
f
Currently, every time you deploy `StaticSite`, the site is uploaded to a new folder inside the S3 bucket. While this makes each deploy “atomic/clean”, as in u don’t have old files from previous deploys, this also makes the deploy A LOT slower. The slowness comes from updating the CF Distribution to point to a different S3 origin.
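For context, roughly what happens under the hood today (a minimal CDK sketch, not the actual construct internals; `deployId` is a made-up name):
```ts
import * as cdk from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";

const app = new cdk.App();
const stack = new cdk.Stack(app, "SiteStack");

const bucket = new s3.Bucket(stack, "SiteBucket");

// Each deploy uploads the site to a fresh folder, so the deploy is
// "atomic/clean", but the distribution must be updated to point at it.
const deployId = "deploy-abc123"; // made up; changes on every deploy

new cloudfront.Distribution(stack, "SiteDistribution", {
  defaultBehavior: {
    // Changing originPath mutates the distribution -- this is the slow
    // AWS::CloudFront::Distribution update in the deploy step.
    origin: new origins.S3Origin(bucket, { originPath: `/${deployId}` }),
  },
});
```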
What do you guys feel about removing this “atomic/clean” behavior, so each deploy uploads the site to the S3 bucket’s root folder, overwriting the previous deploy? @Sam Wrigley @Derek Kershner
d
My opinion is that this is a new construct (`DevStaticSite`?). While this may take some busy work to create a base class and such, it saves the question of *how do we switch from one setting to the other?*, which seems difficult to do perfectly and more work. It also makes intention VERY explicit. As to the merit of the concept, I get why this would be attractive in certain use cases, but I am not sure the goal will be fully achieved, because you will still need to invalidate the cache, and that seems like at least part of the time sink. If I had to call which took longer, I would have said that invalidation takes longer than switching origin targets, but I could be wrong.
More likely path, imo, is to do the `DevStaticSite` route and remove CloudFront entirely. That seems cheap (for low traffic volume and lots of deploys), fast (would be almost instant, S3 only), and to accomplish the goal most effectively. @Frank
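Sketch of what I mean (plain S3 website hosting; the construct name above is just my suggestion, the props here are standard CDK, nothing SST-specific):
```ts
import * as cdk from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as s3deploy from "aws-cdk-lib/aws-s3-deployment";

const app = new cdk.App();
const stack = new cdk.Stack(app, "DevSiteStack");

// No distribution at all: nothing to update, nothing to invalidate.
const bucket = new s3.Bucket(stack, "DevSiteBucket", {
  websiteIndexDocument: "index.html",
  publicReadAccess: true,
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ACLS,
});

// Deploys are a plain upload to the bucket root.
new s3deploy.BucketDeployment(stack, "DeployDevSite", {
  sources: [s3deploy.Source.asset("./build")],
  destinationBucket: bucket,
});

new cdk.CfnOutput(stack, "DevSiteUrl", { value: bucket.bucketWebsiteUrl });
```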
f
Thanks @Derek Kershner, could you clarify at what point of the deployment the site will be temporarily down?
> this deployment method will take the site down temporarily
^ deploying to the bucket root is what CDK’s s3Deployment construct does
d
It can be pretty short, but there is always a small amount of time when CloudFront has updated its origin but the actual code has not. As in, let’s say you deploy a React site and it is using a static with code `x` (most frontends attach a code to deploys to keep em unique). If you deploy a new site with code `y`, deleting the files with code `x`, and a page is currently cached with code `x` as a requested static, the site will be down for any users that show up in between the origin update and the cache invalidation.
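Spelled out as a timeline (file names made up; the gap between t1 and t4 is the point):
```
t0  new assets uploaded (code y); old assets (code x) deleted
t1  distribution origin switched to the y deploy
t2  a user loads a still-cached index.html that references main.x.js
t3  their request for main.x.js 404s, because the x files are gone
t4  invalidation completes; fresh index.html references main.y.js
```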
This is less the case with going to S3 directly, as there is only one thing to update.
@Frank
f
Right, but that won’t happen if we don’t remove code `x`. For example, the old `index.html` will be replaced, since it doesn’t have a code. Old js/css files that have a code will not be deleted.
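In CDK terms that’s roughly a non-pruning deployment. A sketch, assuming the standard `BucketDeployment` construct:
```ts
import * as cdk from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as s3deploy from "aws-cdk-lib/aws-s3-deployment";

const app = new cdk.App();
const stack = new cdk.Stack(app, "SiteStack");
const bucket = new s3.Bucket(stack, "SiteBucket");

// prune: false keeps files from previous deploys in the bucket, so
// hashed assets like main.x.js survive until nothing references them.
// Un-hashed files (index.html) simply get overwritten.
new s3deploy.BucketDeployment(stack, "DeploySite", {
  sources: [s3deploy.Source.asset("./build")],
  destinationBucket: bucket,
  prune: false, // default is true, which deletes files not in the source
});
```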
d
well, then it might just get very big, but I like that a little better
Fixed my above message, but my strategy would still be the same.
f
Current `StaticSite` has a similar flaw: users can have `index.html` open from a previously deployed version. Now they try to request a js file with code `x`, but since the CF now points to the new deployed folder with code `y`, the user will get an error.
The proper way to handle atomic deploys (I think this is what Netlify does) is that the `index.html` files are deployed to subfolders, and the js/css files are deployed to a shared folder with all historical versions. ^ that’s how they can give u a unique url to each historical version of `index.html`.
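Something like this layout (folder names made up for illustration):
```
bucket/
├── deploys/
│   ├── v1/index.html     # each deploy gets its own index.html
│   └── v2/index.html     # the "current" one is what gets served
└── assets/               # shared, append-only hashed js/css
    ├── main.x.js         # still referenced by v1/index.html
    └── main.y.js         # referenced by v2/index.html
```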
d
That strategy seems like it would work, though it might be a little taxing to support a bunch of frameworks.
Regardless, I think you guys should give yourself breathing room, and implement two constructs, one where people really care, and one where they do not care so much, so that you can act accordingly in both cases.
One size fits all is going to be increasingly tough with competing priorities.
f
Understood, yeah that makes sense. Let me share this with @thdxr and @Jay.
Hey @Derek Kershner, after discussing w/ the team, we decided to go w/ removing the atomic deploy concept for now and deploying directly to the bucket root. Also adding an option to have CFN not wait for the CF invalidation to complete. The main drive for this is:
• current implementation (ie. updating the Distribution’s origin) adds 2-5 min to deploy time
• current implementation can cause intermittent downtime for low traffic sites
What we want to do down the road is:
• make `StaticSite` more flexible over time to handle custom deployment strategies (u can deploy to a subfolder; u can invalidate certain files; u can prune old files, etc);
• and invest in framework-specific constructs like `ReactStaticSite` and `ViteStaticSite` to implement atomic deploy properly
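Roughly what that option could look like on the construct (the prop name is our working idea from this thread, not a final API):
```ts
import * as sst from "@serverless-stack/resources";

export default class SiteStack extends sst.Stack {
  constructor(scope: sst.App, id: string) {
    super(scope, id);

    new sst.StaticSite(this, "Site", {
      path: "frontend",
      buildCommand: "npm run build",
      buildOutput: "build",
      // Working idea: don't block the deploy on the CloudFront
      // invalidation completing.
      waitForInvalidation: false,
    });
  }
}
```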
Really appreciate the insights you shared on this!
d
Makes sense to me.
d
Yikes! I must have missed this thread. I had a PR that made `deployId` optional, and the fallback was to empty the whole bucket just before transferring all the files. Can we not make that an option? Right now this change has caused a larger issue for us than deploy times, and that is that now all our deploys have leftover files from previous deploys. @thdxr @Frank