# random
t
What are some good arguments for breaking your app into multiple stacks vs one big stack (aside from resource limits)
s
deploy speed (CFN change sets are sloooww when stacks are large)
a
Define better contracts and dependencies.
c
Also curious, what rules of thumb are folks using to decide when and how to break up stacks? E.g. one for each autonomous/discrete service or another way of aligning stacks for how resources are being accessed?
s
@Clayton I break mine up based on two things: dependencies, and concern. So I have a `core` stack upon which everything else depends.. it contains Cognito, DB, S3 buckets, etc. The other stacks are broken up by concern/purpose: there's one for the HTTP API, one for the media processor, one to handle webhooks from Chargebee, etc. (Actually, the API is broken up further because I hit CloudFormation's resource limit, so there's `api-base`, which contains the actual API Gateway, then separate stacks for groups of routes.)
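A minimal sketch of the layout Sam describes: one `core` stack everything depends on, plus a stack per concern. The `App`/`Stack` classes here are simplified stand-ins so the example runs on its own, not the real aws-cdk-lib API, and the stack names besides `core` and `api-base` are illustrative:

```typescript
// Stand-in classes modeling a CDK-style app with explicit stack dependencies.
class App {
  stacks: Stack[] = [];
}
class Stack {
  deps: Stack[] = [];
  constructor(app: App, public id: string) {
    app.stacks.push(this);
  }
  addDependency(other: Stack) {
    this.deps.push(other);
  }
}

const app = new App();
const core = new Stack(app, "core");             // Cognito, DB, S3 buckets
const apiBase = new Stack(app, "api-base");      // the API Gateway itself
const apiRoutes = new Stack(app, "api-routes");  // a group of routes
const media = new Stack(app, "media-processor");
const webhooks = new Stack(app, "chargebee-webhooks");

// Every concern stack depends on core...
for (const s of [apiBase, apiRoutes, media, webhooks]) {
  s.addDependency(core);
}
// ...and the route stacks also need the gateway stack.
apiRoutes.addDependency(apiBase);
```

In real CDK, referencing a resource from another stack in the same app wires up the cross-stack export/import and the dependency for you; the explicit `addDependency` above just makes the graph visible.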
t
I felt like multiple stacks were slower than one stack but haven't explicitly paid attention
c
Thanks @Sam Hulick
s
oh man, when my API was one huge stack, CDK would sit there for 5 minutes due to whatever it was doing with the CFN change set. it was terrible.
so the SST CLI just sat there
b
at my org, we use Domain-Driven Design, which tends to make these decisions relatively easy. In practice I do take @Sam Hulick's approach, but use DDD to figure out what the "concerns" are. For clarity, each domain usually does have a "core" stack, but we try to keep application resources in their own application stack, and only truly shared resources (like event-stores, event-buses, etc.) in a shared stack that all applications in the domain depend on.
j
I split my stacks based on dependencies/types/coupling and domain/context/concern. For instance, each microservice has its own stacks, and shared infrastructure and compute belong in different stacks. All core services behave in the same way as microservices. This way, changing microservice A only updates stacks from A. And changing the functions in service A does not affect microservice A's shared infrastructure stack. Also, changing core stacks might affect all services.
@Blake E Yes, that's it.
t
I use DDD as well and have been splitting my stacks accordingly. But I can recreate the concerns and dependencies inside a single stack using functions and still maintain separation
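A sketch of the "one stack, concerns separated by functions" idea from the message above. The types are illustrative stand-ins, not real CDK; the concern and resource names are made up:

```typescript
// One stack; each concern is a plain function that adds its resources.
interface Resource {
  concern: string;
  id: string;
}
class SingleStack {
  resources: Resource[] = [];
  add(concern: string, id: string) {
    this.resources.push({ concern, id });
  }
}

// Each function owns one concern, just as a separate stack would,
// and each could live in its own file.
function addCore(stack: SingleStack) {
  stack.add("core", "UserPool");
  stack.add("core", "Table");
}
function addApi(stack: SingleStack) {
  stack.add("api", "HttpApi");
}

const stack = new SingleStack();
addCore(stack);
addApi(stack);
```

The boundaries stay visible in the code even though CloudFormation only ever sees one stack.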
j
@thdxr disadvantage: You have to handle dependencies in the deployment sequence when you split stacks. Unless the rules for sharing information are well defined.
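The ordering problem JP mentions can be sketched as a topological sort over the stack dependency graph (a toy version; no cycle detection, and the stack names are illustrative):

```typescript
// Given a map of stack -> stacks it depends on, return a deploy order
// where every stack comes after its dependencies.
function deployOrder(deps: Record<string, string[]>): string[] {
  const order: string[] = [];
  const seen = new Set<string>();
  const visit = (s: string) => {
    if (seen.has(s)) return;
    seen.add(s);
    for (const d of deps[s] ?? []) visit(d); // dependencies first
    order.push(s);
  };
  Object.keys(deps).forEach(visit);
  return order;
}

// core must go out before the stacks that import its exports.
const order = deployOrder({ core: [], api: ["core"], media: ["core"] });
// → ["core", "api", "media"]
```

This is essentially what CDK/SST derive for you from cross-stack references, which is why "well defined rules for sharing information" make the sequencing a non-issue.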
c
Thanks guys, that definitely makes sense. I’m still coming up to speed on serverless best practices that can keep things performant and resilient. I’ve been liking John Gilbert’s take on creating ‘autonomous services’ within an event-first system and that approach to stacks seems to line up the same way. https://medium.com/@jgilbert001
j
What about multiple team members working on core / services, some updating their infra and others tweaking code? If it's all in one stack, then it's all in one file, and then it's a mess to synchronize, right? Synchronize as in update, commit, push, deploy.
s
I’m not sure if my setup is ideal, but I think it works the way you all are describing. my microservices like billing webhook & media processor can be updated without affecting any other service. but changes to the core stack may cause other stacks to be updated in turn
(the core stack would very rarely be updated once we launch)
t
Here's what I mean by achieving the same separation of concerns without using multiple stacks: https://serverless-stack.slack.com/archives/C01HQQVC8TH/p1630939124277300?thread_ts=1630610359.238000&cid=C01HQQVC8TH
I recommended a single stack here because it's a deployment-per-customer setup, and number of customers × number of stacks seemed like a lot to deal with
j
Once you automate, "a lot" changes in meaning.
t
A single stack doesn't have to be a single file
s
single stack isn’t doable for some people though (I would’ve far exceeded the resource limit)
t
Yeah I'm looking for reasons outside the resource limit. Slowness as you cited is a good reason
Having used these things in practice: yes, in theory separate stacks are ideal. But in practice CF can be glitchy when things go wrong, and it's a lot harder to fix issues when there are multiple stacks (those of you who have run into the exports issue know what I mean). It feels wrong not to use multiple stacks, so I'm trying to see what objective advantages there are.
j
Well, in the other thread, maybe the options were too tightly coupled with the non-optional parts to be defined in another stack. But then again, we get back to the question of type and coupling, that is, domains, and shared vs. specific, etc.
You know what this reminds me of, @thdxr? Your question about single-table design. Right?
t
I don't think single stack forces you to not separate concerns as you can see in my example. It looks like multiple stacks but under the hood only uses one stack
b
going to +1 @thdxr's approach, even if I don't follow it: more important than BEING multiple stacks is a separation of concerns that your team can understand, imho.
j
Have you ever used more than one stack in a single App?
t
I currently use multiple stacks for my projects
Speaking of DDD, I also am starting to wonder if I should colocate my cdk code with my service code vs having it all in a single folder at the root. Feels like it makes sense to have a folder with the lambda code + how it's deployed together
j
Does anybody else think this question has many faces?
• mono-repo with many packages, or multi-repo with one or a few packages each (monorepo or not)
• mono-db with many entities, or multi-db with one or a few entities each (single-table design or not)
• ... mono-app with many stacks, or multi-app with one or a few stacks each (the current issue)
s
I don’t think there’s any right answer.. no one-size-fits-all. just depends on the project itself, and what makes sense as far as structuring it to be easy for the team to grasp
t
Yeah this is the same thing that always comes up everywhere 😄
b
@JP (junaway) I'd def. recommend doing a DDD audit of your application/services (if they already exist) or a capabilities map if you're pre-build. Take these questions in there, and you should find answers that match your team / use-case
remember: boundaries, stacks, service responsibilities, etc. are pretty expensive to change later if you make a mistake, so take your time. I always err on the side of caution and make services/applications/stacks bigger at first. Unless I have conviction and the separation is obvious, I resist early segregation.
monoliths are way easier to build early on, and when you have conviction, can break up later
j
I'll run one 🙂 Thanks! I must be lucky that our early segregation worked, then 😄
Stacks are units of deployment, right? So... what if there are some shared stacks? Like two deploys of Stack A, let's call them A1 and A2, using one single deploy of Stack B. In this case, there is some meaning to the partition performed by splitting the stacks.
t
^ I have this pattern currently (shared auth system across multiple products). But ultimately I could have modeled it as a single stack and just shared the underlying resources. In my case I have 2 separate repos though, so it's not possible
j
For instance, a core Stack like we mentioned before. Or perhaps some SaaS analytics Stack that is used across all tenants, each of which may have its own stack.
In your case, you could have, because the coupling is static relative to the deployment of your infrastructure: i.e. you have some products and that's it; once they're deployed, they're deployed. But what if you deployed your product stack(s) once for each new tenant? You couldn't ever do that with a single stack, could you?
I mean, if your shared auth system is part of the same stack, then it would be replicated as well.
t
I was pretending each of my products = a tenant. Not that this is what I'd suggest but technically possible to create a construct that represents a tenant and create an instance of it for each customer in the same stack. This is a bad idea though because you'll hit resource limits + you lose the ability to cleanly deprovision an entire customer by deleting a stack
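The tenant-as-construct idea above can be sketched like this (stand-in types, not real CDK; the per-tenant resource names are made up):

```typescript
// One stack, with a Tenant construct instantiated once per customer.
class ProductStack {
  resources: string[] = [];
}
class Tenant {
  constructor(stack: ProductStack, name: string) {
    // Each tenant stamps out its own copies of the product's resources.
    for (const r of ["Api", "Queue", "Table"]) {
      stack.resources.push(`${name}-${r}`);
    }
  }
}

const stack = new ProductStack();
for (const customer of ["acme", "globex"]) {
  new Tenant(stack, customer);
}
```

The resource count grows as customers × resources-per-tenant, which is how this hits CloudFormation's per-stack resource limit, and deprovisioning "acme" means surgically removing its resources instead of simply deleting an acme stack.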
a
Wow, I have like 200 messages to read now.
t
tldr it was just me being weird again
a
@thdxr you never like simple stuff, right? haha.