# guide
h
Hello, I've got a question regarding SST working side-by-side with existing SLS resources. We want to release our current SST infrastructure to replace SLS, but keep some of the existing resources untouched (DynamoDB, for example) while still being able to update or edit them if needed. What would be the best approach for this? In dev we want everything deployed fresh via SST and separated by stages, but in prod we want to keep some of the existing resources as they are while still having them available in our SST code. Is there any "best practice" approach to achieve this in one code base, or is it literally a matter of `if` statements that check stage and region (for example)? I've already implemented some CDK `.fromArn` (for example) methods which import existing resources, but I'm curious what the best approach would be. Thanks!
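Roughly what that looks like today (a minimal sketch; the stage check, table name, and ARN are placeholders for our real values, and imports are for CDK v2):

```ts
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";
import { Construct } from "constructs";

// `stage` would come from the SST app (e.g. app.stage);
// the ARN below is a placeholder for the real SLS-deployed table
export function getTable(scope: Construct, stage: string): dynamodb.ITable {
  if (stage === "prod") {
    // prod: reference the table that Serverless Framework already deployed
    return dynamodb.Table.fromTableArn(
      scope,
      "Table",
      "arn:aws:dynamodb:eu-west-1:123456789012:table/my-existing-table"
    );
  }
  // every other stage: create a fresh table, namespaced per stage by SST
  return new dynamodb.Table(scope, "Table", {
    partitionKey: { name: "pk", type: dynamodb.AttributeType.STRING },
    billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
  });
}
```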
s
I'm interested to hear what people are doing to integrate existing infra into new projects (e.g. a database, VPCs, etc.). My current understanding involves using ARNs to import/reference existing resources. I'm not sure if this is a good long-term solution, or if there is a different migration path to move existing resources from one tool (e.g. Serverless Framework, CloudFormation, Terraform) to another (SST and CDK).
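For a VPC specifically, here's a minimal sketch of the two CDK import styles I understand to be common (the VPC/subnet IDs and AZs below are placeholders):

```ts
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";

declare const stack: cdk.Stack; // your SST/CDK stack

// looks the VPC up at synth time and caches the result in cdk.context.json;
// this requires an explicit account/region env on the stack
const vpc = ec2.Vpc.fromLookup(stack, "Vpc", { vpcId: "vpc-0abc1234" });

// alternatively, if you already know the attributes, skip the lookup entirely
const vpcByAttrs = ec2.Vpc.fromVpcAttributes(stack, "VpcByAttrs", {
  vpcId: "vpc-0abc1234",
  availabilityZones: ["eu-west-1a", "eu-west-1b"],
  privateSubnetIds: ["subnet-aaaa1111", "subnet-bbbb2222"],
});
```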
j
I think for some existing resources that might be the best way, especially if it’s different for prod vs dev. I’ll also throw in this doc in case you guys haven’t seen it: https://docs.serverless-stack.com/migrating-from-serverless-framework
s
For me, a big part of what's terrifying about migrations is the potential to lose the stateful stuff. For that reason I would really consider a rebuild of the infrastructure in full, if you get the chance 👍 If the stack isn't complex, rebuilding from scratch is a worthwhile time investment, because CDK/SST will streamline a lot of things.
• You can set up streams from your existing DynamoDB table (or S3 snapshots of it) and capture all the data in your new table, ready for a cut-over (see the sketch after this list).
• Do a zero-downtime migration of anything in RDS with the Database Migration Service: https://aws.amazon.com/blogs/database/accelerate-data-migration-using-aws-dms-and-aws-cdk/
• Set up bucket replication to another S3 bucket created with CDK, and copy everything across from your existing S3 buckets.
Then doing some load shifting with Route 53 would enable a blue/green switchover. This is a great example of zero-downtime re-sharding for Postgres: https://www.notion.so/blog/sharding-postgres-at-notion
I am considering writing a blog post about a full migration from SLS to SST/CDK. Currently one monorepo with 9 serverless.yml files and 6 RDS instances that I need to migrate to SST/CDK 🤷‍♂️
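For the DynamoDB point, a minimal sketch of what the stream consumer could look like, dual-writing into the new table until cut-over (`TARGET_TABLE` is an assumed env var, and you'd still need a one-off backfill for items written before the stream was enabled):

```ts
import { DynamoDBStreamEvent } from "aws-lambda";
import {
  DynamoDBClient,
  PutItemCommand,
  DeleteItemCommand,
} from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});
const TARGET_TABLE = process.env.TARGET_TABLE!; // the new CDK/SST-managed table

export async function handler(event: DynamoDBStreamEvent) {
  for (const record of event.Records) {
    if (record.eventName === "REMOVE") {
      // mirror deletes so the two tables stay in sync until cut-over
      await client.send(
        new DeleteItemCommand({
          TableName: TARGET_TABLE,
          Key: record.dynamodb!.Keys as any, // stream types differ slightly from SDK types
        })
      );
    } else if (record.dynamodb?.NewImage) {
      // NewImage is already in DynamoDB JSON, so it can be written as-is
      await client.send(
        new PutItemCommand({
          TableName: TARGET_TABLE,
          Item: record.dynamodb.NewImage as any,
        })
      );
    }
  }
}
```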
s
@Simon Reilly I would love to read that blog post!
j
Yeah @Simon Reilly we’d love to share this as well!
h
Thanks for the responses! Much appreciated. I'm already employing both of those strategies: I have an import-from-ARN for the "must haves" like DynamoDB, but everything else I'm deploying from scratch with SST. I highly doubt I will be able to migrate the existing DynamoDB without redeploying it, which sucks, but hey. I might test out the bucket replication system; it sounds like a chunk of work that should defo go on the backlog 😛 Thanks again! Would love to hear Frank's take on this and how he sees it.
Sorry to double message and revive this, but is there any suggestion on how to import and use a Cognito User Pool within an SST stack? tl;dr: I have an existing user pool that I want to import and attach a new permission to.
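Something like this is what I've got in mind (a sketch only; the ARN is a placeholder, imports are for CDK v2, and `myFunction` stands in for whatever needs the new permission):

```ts
import * as cdk from "aws-cdk-lib";
import * as cognito from "aws-cdk-lib/aws-cognito";
import * as iam from "aws-cdk-lib/aws-iam";
import * as lambda from "aws-cdk-lib/aws-lambda";

declare const stack: cdk.Stack; // your SST/CDK stack
declare const myFunction: lambda.Function; // hypothetical consumer of the pool

// import the existing pool by ARN (fromUserPoolId works too)
const userPool = cognito.UserPool.fromUserPoolArn(
  stack,
  "ExistingPool",
  "arn:aws:cognito-idp:eu-west-1:123456789012:userpool/eu-west-1_XXXXXXXXX"
);

// the imported IUserPool is mostly read-only, but new child resources
// can still be attached to it, e.g. an app client:
const appClient = userPool.addClient("SstAppClient");

// ...and new permissions can be granted against its ARN:
myFunction.addToRolePolicy(
  new iam.PolicyStatement({
    actions: ["cognito-idp:AdminGetUser"],
    resources: [userPool.userPoolArn],
  })
);
```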
j
@Frank just pulling you in for this.
t
I do just simple if/else statements when I need stateful stuff / to pull in existing stuff. In tricky situations I'll try to recreate a similar setup on a lower env as well (e.g. have a VPC / DynamoDB created outside of SST automation), just to prove that I'm correctly integrating with existing resources, because then the only difference on prod is the DynamoDB ARN. Cherry on the cake: I use env config files to control that, so on dev envs I have the abc VPC/DynamoDB and on prod I have the cdf VPC/DynamoDB.
That way my code is shared between dev & prod, and the only difference is the config files (sketched below).
That's obviously only worth the hassle sometimes 🙂 e.g. when you think the prod setup is tricky / prone to changes and you wanna be able to fully test it ahead of time.
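For what it's worth, a minimal sketch of what that config-driven setup could look like (file layout, config shape, ARNs, and IDs are all placeholders, not the poster's actual setup):

```ts
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";
import { Construct } from "constructs";

// shared shape, e.g. config/types.ts
export interface EnvConfig {
  createTable: boolean; // dev: create fresh; prod: import existing
  tableArn?: string;    // only set where we import
  vpcId: string;
}

// e.g. config/prod.ts — points at the resources SLS already owns
export const prodConfig: EnvConfig = {
  createTable: false,
  tableArn: "arn:aws:dynamodb:eu-west-1:123456789012:table/legacy-table",
  vpcId: "vpc-0prod1234",
};

// e.g. config/dev.ts — everything gets created fresh per stage
export const devConfig: EnvConfig = {
  createTable: true,
  vpcId: "vpc-0dev5678",
};

// in the stack: one shared code path, behavior driven only by config
export function getTable(scope: Construct, config: EnvConfig): dynamodb.ITable {
  return config.createTable
    ? new dynamodb.Table(scope, "Table", {
        partitionKey: { name: "pk", type: dynamodb.AttributeType.STRING },
      })
    : dynamodb.Table.fromTableArn(scope, "Table", config.tableArn!);
}
```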