# orm-help
j
I once asked this question, but I would be curious to ask it again. In current CI/CD workflows (for example, deploying a Docker container to AWS), I am always curious where in their process people run the actual table migrations.
• Is this in the same Docker image you serve the API from? The caveat is that you would have to include prisma/cli as a dependency in your Docker image purely for migration time, which might make the image much heavier.
• Or do you use a separate image purely for the migration? And if so, when do you deploy this image?
r
Hey @Jonathan 👋 Do you currently perform your migrations in a separate workflow before deployment, or in the same workflow?
j
Hey Ryan! We currently have a bash entrypoint script (`CMD bash docker-entrypoint.sh`) defined for our Docker image, which applies the migrations in the same workflow (right after pulling the latest Docker image). This has two problems, however:
• If the migration fails for some reason, the entire script fails. I suppose it could be fixed with a try/catch in the bash script, but I suspect there are better ways to decouple running the component from running the migrations.
• We would need to bundle `@prisma/cli` only to be able to call the migration tool in our Dockerfile, doubling the Docker image size.
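For context, a minimal sketch of what such an entrypoint looks like. The actual script isn't shown in the thread; the server path and the presence of `@prisma/cli` in the image are assumptions:
```bash
#!/usr/bin/env bash
# docker-entrypoint.sh — illustrative sketch only, not the script
# discussed above. Assumes the server entry point is dist/server.js
# and that @prisma/cli is installed in the image.
set -euo pipefail

# Apply any pending migrations before the API starts. If this fails,
# the whole container fails to start — the coupling described above.
yarn prisma migrate deploy --preview-feature

# Only reached once the migration succeeded.
exec node dist/server.js
```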
d
We use CodePipeline, as that is the only way to get access inside the VPC to run the migrations. We run any integration tests, build the container, and then build a separate container that just runs the migrations; if everything is successful, we deploy the built container to ECS. Each step builds its own container, which is then discarded or deployed. This may not be best practice, but it's pretty reliable for us.
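A sketch of what such a migration-only image could look like. The base image, file layout, and the assumption that `@prisma/cli` is a dependency in package.json are all illustrative, not Dominic's actual setup:
```dockerfile
# Illustrative migration-only image: just the dependencies, the Prisma
# schema, and the migrations folder — it runs the migrations and exits.
FROM node:12-alpine

WORKDIR /app
COPY package.json yarn.lock ./
# Assumes @prisma/cli is listed in package.json.
RUN yarn install --frozen-lockfile

# Only the pieces Prisma Migrate needs.
COPY prisma ./prisma

# Runs once per deployment, then the container exits.
CMD ["yarn", "prisma", "migrate", "deploy", "--preview-feature"]
```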
Messing around with the DigitalOcean App Platform, I just tend to run the migration in the container before starting the server, since it has already passed integration tests etc., so it shouldn't fail at that point.
j
Hi Dominic, I was curious about CodePipeline as well (I was looking into whether I should apply this via GitHub Actions or CodePipeline of some sort). So you have a single container (Dockerfile) defined just for the deployments?
And then in one of the steps (after testing and such), you run this migration container, and if that succeeds, you push the container to ECS?
d
yes
So to access this in GitHub Actions, you would have to make your database publicly accessible, which isn't great, but it makes things a lot easier. You could definitely run all your tests, build the container, and then migrate and push the container after everything has passed. Depends how long your container takes to deploy.
For blue/green deployments, it's good practice to ensure your migrations are backwards compatible for at least one version, so that you can roll back without any issues in the database.
There are cases where this isn't possible though, so I can understand the questions.
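A hypothetical example of what "backwards compatible for at least one version" means in SQL terms (table and column names are made up): instead of renaming a column in one migration, split the change across releases so the previous version keeps working:
```sql
-- Release N: additive only, so the previous app version still works.
ALTER TABLE "User" ADD COLUMN "full_name" TEXT;
UPDATE "User" SET "full_name" = "name";

-- Release N+1, once no running version reads "name" anymore:
ALTER TABLE "User" DROP COLUMN "name";
```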
So here is the test yml:
```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - yarn
      - yarn build
  build:
    commands:
      - yarn ci
      - yarn test:integration
```
And here is the migration yml:
```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - yarn
      - yarn build
  build:
    commands:
      - yarn prisma migrate deploy --preview-feature
```
nice and simple
j
Ah yes, you do make a point there. I would want to keep my database private inside the private subnet of the VPC, so perhaps GitHub Actions is not really the best way to go.
d
And if any step fails, it stops the deployment at that point.
yeah
So we use GitHub Actions to perform any code-based automation (deploying source maps to Sentry, generating releases, release notes, release emails).
j
Though for B/G deployments, since Prisma does not technically support migration rollback, I interpret that for you as writing migrations such that, while the old version still exists, they do not break its API.
d
correct
j
Haha, that's what I was hoping to use GitHub Actions for as well.
Amazing, this gives me some extra food for thought
d
no problem
I wish there were better network-peering solutions for CI stuff, but there don't seem to be right now.
j
I really appreciate you sharing your workflow, thanks a lot man 🙂 I will no doubt feel a bit more confident thinking about this moving forward
d
No worries. Since we are a startup, our initial setup was just deploying through Actions and running the migrations manually
until we could pay off that tech debt
r
I agree with @Dominic Hadfield here and the workflow described with CodePipeline. If you're using GitHub Actions, you can also initialize a database in your pipeline, run the tests, and then perform the migrations on the actual database if it's publicly accessible. So it would look something like this:
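A sketch of such a workflow, assuming a throwaway Postgres service for the tests; every name, version, and secret is a placeholder, not the original snippet:
```yaml
# Illustrative GitHub Actions workflow reconstructing the idea above.
name: test-and-migrate

on:
  push:
    branches: [main]

jobs:
  test-and-migrate:
    runs-on: ubuntu-latest
    services:
      # Throwaway database, initialized fresh for the test run.
      postgres:
        image: postgres:12
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: 12
      - run: yarn
      - run: yarn build
      # Migrate the throwaway database and run the tests against it.
      - run: yarn prisma migrate deploy --preview-feature
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/postgres
      - run: yarn test:integration
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/postgres
      # Only after everything has passed: migrate the actual database,
      # which must be reachable from the runner (i.e. publicly accessible).
      - run: yarn prisma migrate deploy --preview-feature
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
```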
❤️ 2
d
Yeah, when we were performing integration tests using GitHub Actions, we had an integration-test database that was exposed, and we would create and drop all tables to do the testing, so it didn't matter that it was publicly accessible.
❤️ 1
💯 1