# help
a
Anyone here using a monorepo of multiple sst applications with npm? Could you please direct me how to create a simple microservices workflow where every service is a separate sst app?
s
Need this too. We have 29 serverless.yml stacks, fullstack. Looking to migrate. Don't want one root app, need decoupling
a
Yep, I would start decoupling now before I start implementing deployments for multiple regions. Will make life simpler.
s
Yeah. Got teams with different needs as well. Some backend, some frontend. Looking to use AWS CDK extensively for IAM. We also have a VPC and RDS. Complexity everywhere
o
I have split my SST into two stacks, should work for more - I basically have a copy of Dax’s typescript starter folder structure under each top level folder
a
@Omi Chowdhury do you mean 2 apps?
s
Two stacks, or two apps? I have probably 14 stacks, but want to also decouple release cadence
c
@Simon Reilly why don't you want one root app?
o
Two sst.json’s, each with multiple stacks. The only reason I have two apps is because I have a hybrid SLS/SST app and need some SST resources to deploy before the SLS stacks, and some after
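Something like this layout, e.g. (folder names just for illustration, not my actual repo):

```
monorepo/
├── infra-pre/          # SST app that must deploy before the SLS stacks
│   ├── sst.json
│   └── stacks/
├── services/           # existing Serverless Framework services
│   └── serverless.yml
└── infra-post/         # SST app that deploys after the SLS stacks
    ├── sst.json
    └── stacks/
```

Each top-level folder with an sst.json is its own app with its own stacks, so the two SST apps can be deployed on either side of the SLS deploy.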
p
We have about 7 SST stacks right now and are looking to grow in our monorepo. We use https://rushjs.io/ which I would highly recommend. It blows away other monorepo patterns and just works.
s
@Chad (cysense) we need to grow. If we want to deploy, say, a couple of layers:
• IAM: CI roles and users, needs super user
• Shared infrastructure: VPC, event bus, web application firewall, S3 antivirus scan
• Microservices: developed by teams
• Frontends: developed by teams
I guess we don't want to couple things. E.g. the microservice team makes no frontend changes, so they shouldn't have to wait ten minutes for a CloudFront invalidation. Microservices teams should not be able to break the VPC by accident and bring everything down. If it was 100% my choice I would put each team in their own AWS account tbh
@Patrick Young interested in rush. We are currently using yarn 2
p
Yeah, my goal is to have every dev in their own AWS org with limits on price etc., just so someone doesn't go off the deep end 😆. Getting our company to do that is another story...

Rushjs treats every package like it is its own repo (symlinks to a shared global node_modules). It allows you to lock pnpm (awesome tool, although npm is "fine" now) versions, node ranges, and dependency ranges (we just locked it so everyone has the same version in every package.json). There are a ton of tools around management with multiple teams (which I have not fully explored). It forces consistency across your packages etc. I just like that every package is its own "repo", so you just drop sst in the root of that package and it just works 🤷. We are using github for change detection (which rush helps with, as you can trigger tests to run on dependencies etc.).
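A minimal rush.json along those lines, roughly (versions, scopes and folder paths are just examples, not our real config):

```json
{
  "rushVersion": "5.62.1",
  "pnpmVersion": "6.32.3",
  "nodeSupportedVersionRange": ">=14.15.0 <17.0.0",
  "ensureConsistentVersions": true,
  "projects": [
    { "packageName": "@acme/shared-infra", "projectFolder": "packages/shared-infra" },
    { "packageName": "@acme/orders-service", "projectFolder": "packages/orders-service" }
  ]
}
```

`pnpmVersion` and `nodeSupportedVersionRange` are where you pin the toolchain, and `ensureConsistentVersions` is what forces the same dependency versions across every package.json.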
For local development you have stuff like this:
```
"deps:watch": "rush build:watch --to-except @insert-this-package-name",
```
Insert with whatever you called that package 🙂
If you make changes in a dependency, it will retrigger live reload etc.
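E.g. wired into an individual package's package.json, it would look something like this (package names are made up):

```json
{
  "name": "@acme/orders-service",
  "scripts": {
    "deps:watch": "rush build:watch --to-except @acme/orders-service",
    "start": "sst start"
  }
}
```

`--to-except` tells rush to watch and rebuild everything this package depends on, excluding the package itself, so your `sst start` live reload picks up dependency changes.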
I've played around with yarn / yarn 2 / lerna / pnpm workspaces / probably others and rushjs is the first one I feel like I didn't have to "fight".
s
That's really good to hear. I will check it out. Our tooling isn't something I have complete control over, but given the positive reviews I'll keep it in mind, especially if we hit any of the problems you have described fixes for. For example, I add SST to our yarn 2 setup and the typescript 4.4.4 patch fails to install ☹️
c
@Simon Reilly Thanks for sharing. I was asking because we went through the same decision recently and decided to use one app. My understanding is that the intended CDK pattern is to have one app per 'app', with multiple stacks within it. We are currently moving in the opposite direction to you: towards a monorepo with a single CDK app and stacks within that app. For example, we have a shared-resource stack within our app which hardly ever changes. We also have our microservices and frontend teams working within the same repo and app.
I guess we don't want to couple things. E.g. micro service team makes no fronted changes, so don't want to wait ten minutes for a cloudfront invalidation.
On this point, this is probably our biggest issue at the moment. I believe there are CDK-specific ways to optimise for this. Specifically what we do is:
1. Don't use nested stacks, so developers can deploy only the stacks they are working on:
```
cdk deploy frontendstack/*
```
2. Use the hotswap feature, so if there is just a change to microservice code that's all that is pushed (this has gotten our deploy times down to less than a minute).
3. Add checks to ensure that only the right stacks are synthed (https://github.com/aws/aws-cdk/issues/11625#issuecomment-940887429)
4. Use cached assets (still trying to figure this out)
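Step 2 on the CLI looks something like this (stack name is made up; `--hotswap` skips the full CloudFormation deployment for supported changes like Lambda code):

```
cdk deploy ordersservicestack --hotswap
```

Worth noting CDK warns this is for dev iteration, not for prod deploys, since it mutates resources outside CloudFormation.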
Microservices teams should not be able to break the vpc by accident, and bring everything down.
Yeah, great point. This is not something we have faced/considered yet, but I think our workflow should prevent it. We have developers developing in their own AWS accounts, we have a sandbox account that developers can collaborate in and access the console, and then we have develop and prod accounts which will only be updated via CI/CD. Hopefully if a developer breaks a VPC it will be caught in sandbox or develop.
s
That's a lot to think about. For now, I think I will stick with separate apps, but I will do a proof of concept on both. Thanks for your points. I will try this out 👍