# orm-help
s
Per the Prisma Roadmap, are there any best practices or solutions for migrating during CI build/deploy steps? (https://www.notion.so/Simplify-integrating-migrate-in-CI-CD-pipelines-bf0019b446cb45c783038a056dec9903)
I'm going live with prisma and my current plan was to manually execute migration steps in higher environments (but I haven't researched much yet).
j
I'm currently playing around with AWS CDK to have a migration step in our AWS pipeline (this is AWS-centric though)
💯 1
If you found another solution, would love to hear about that
Also, you might be interested in this thread: https://prisma.slack.com/archives/CA491RJH0/p1610579229458800
s
My initial thoughts were to add a step in codepipeline (haven't migrated to CDK yet)
As for the linked post, I added prisma generation and migration steps to my docker image. The only issue I ran into was providing the environment variable for DB connection (this appears to be an AWS issue though). Even though I supplied the environment variable for the build step, the docker container didn't appear to find it 🤷
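One common cause of that symptom (not certain it's yours): CodeBuild environment variables are visible to the buildspec shell, but they don't reach the inside of a `docker build` unless you forward them explicitly as build args and declare them with `ARG` in the Dockerfile. A hedged sketch, with `DATABASE_URL` as a stand-in variable name:

```yaml
# buildspec.yml (fragment) - assumes DATABASE_URL is set on the CodeBuild project.
version: 0.2
phases:
  build:
    commands:
      # The shell sees $DATABASE_URL here, but the Dockerfile only sees it
      # if it is forwarded as a build arg AND declared there with `ARG DATABASE_URL`.
      - docker build --build-arg DATABASE_URL=$DATABASE_URL -t api ./api
```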
j
Maybe share your buildspec?
🙏 1
Could possibly spot it (no guarantees)
s
I'll likely take you up on your offer if it still stands for next week. I've got a deadline and I can defer that problem until next week. Thank you for the extra set of eyes.
j
Sure thing, I'm also dealing with this atm, so let's see then what we can do :)
👍 1
j
If you folks have any insights into how you handle deployments, specifically making sure you aren't serving requests with old API code that's incompatible with the new schema, I'd be interested. I assume the only options are draining old requests and holding incoming requests until the deployment is done, or somehow ensuring the migration is applied in two steps that are each backwards compatible
There's surprisingly little information in the articles I've seen about handling migrations in a setting with load balancing
j
One piece of advice I commonly read is indeed to write your schema changes such that they are always backwards compatible
But it is difficult to ensure this in practice
j
I mean, how would you do that if you rename a column? Add a view that emulates the old schema, I guess?
j
I think the common strategy is: add the new column, copy data from the old column to the new one, stop writing to the old column, then delete it in the next migration
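At the application layer, that expand/contract pattern is often paired with dual writes and tolerant reads, so old and new server versions can coexist during the rollout. A minimal sketch, assuming a hypothetical rename of `name` to `fullName` (field names are made up for illustration):

```typescript
// During the transition window, old servers still read/write `name`,
// while new servers write both columns and prefer `fullName` on read.

interface UserRow {
  name?: string;     // old column, dropped only in the final "contract" migration
  fullName?: string; // new column, added by the "expand" migration
}

// Dual-write: new code writes both columns so old servers still see `name`.
function writeUser(row: UserRow, value: string): UserRow {
  return { ...row, name: value, fullName: value };
}

// Tolerant read: prefer the new column, fall back to the old one for rows
// written by servers that haven't been updated yet.
function readUser(row: UserRow): string | undefined {
  return row.fullName ?? row.name;
}
```

Once every server is on the new code and a backfill has copied the remaining old values, the `name` column (and the dual write) can be removed in a follow-up migration.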
j
But then how would you handle different servers with different understandings of the schema? Seems like someone would be reading/writing somewhere out of date
j
I haven't built this in; I currently plan nightly builds, which have downtime, but that's not scalable in the long run
Indeed, I agree with you there. I suspect I will have to solve this problem soon as well, though I have no concrete answer atm
j
@Jonathan hey man, do you have an example of deploying Prisma with AWS CodePipeline?
j
I can show you the cdk code we use?
j
that sounds great if you can
@Jonathan
j
```typescript
const buildStage = pipeline.addStage('build');
buildStage.addActions(new codepipeline_actions.CodeBuildAction({
  actionName: `${props?.prefix}DockerBuild`,
  input: sourceArtifact,
  outputs: [buildArtifact],
  // The build will create an environment and
  // - Create a production-ready build (new tag and latest, with an imagedefinitions.json for the deployment).
  // - Create a migration container containing the latest changes and prisma.
  //   This is necessary as the codebuild is mostly private and has no access to npm.
  project: new codebuild.PipelineProject(this, `${props?.prefix}DockerBuild`, {
    environmentVariables: {
      SERVICE_NAME: { value: props?.apiService.service.taskDefinition.defaultContainer?.containerName },
      IMAGE_REPO_NAME: { value: repoName },
      MIGRATE_REPO_NAME: { value: migrationRepoName },
      IMAGE_TAG: { value: 'latest' },
      AWS_ACCOUNT_ID: { value: 'YOUR_ACCOUNT_ID' },
      AWS_DEFAULT_REGION: { value: 'eu-central-1' },
    },
    environment: {
      privileged: true,
      buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2_3
    },
    buildSpec: codebuild.BuildSpec.fromObject({
      version: '0.2',
      phases: {
        pre_build: {
          commands: [
            'echo Logging in to AWS',
            'aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com'
          ]
        },
        build: {
          commands: [
            'echo Build started',
            'echo Building the Docker image...',
            'docker build -t $IMAGE_REPO_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION ./api',
            'docker build -t $MIGRATE_REPO_NAME:latest --target migrateBuilder ./api',
            'docker tag $IMAGE_REPO_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION',
            'docker tag $IMAGE_REPO_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:latest',
            'docker tag $MIGRATE_REPO_NAME:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$MIGRATE_REPO_NAME:latest'
          ]
        },
        post_build: {
          commands: [
            'echo finishing up',
            'docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION',
            'docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:latest',
            'docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$MIGRATE_REPO_NAME:latest',
            'echo finished pushing docker image',
            `printf '[{"name":"%s","imageUri":"%s"}]' $SERVICE_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$CODEBUILD_RESOLVED_SOURCE_VERSION > imagedefinitions.json`,
          ]
        }
      },
      artifacts: {
        files: [
          'imagedefinitions.json'
        ]
      }
    }),
    role: buildRole,
  }),
}));
```
👍 1
I can't share the complete bit, so I'll share as much as I can: during our CodePipeline, we have an ECR repository where we store images containing our latest migrations
```typescript
const migrationRepoName = 'migrations';
const migrationRepo = ecr.Repository.fromRepositoryName(this, 'MigrationRepo', migrationRepoName);
```
Then I do a `codebuild` step, where we build basically two images: one which is the general API container image, and one which contains prisma + the migrations. The latter is what we refer to as the `MIGRATE_REPO`, and it is passed in via CDK. By pushing to this repository, we allow a separate stage to use it later
💯 1
Then we do a separate migration stage, which takes place in an ISOLATED subnet (the same subnet as the db). We ensure that the CodeBuild runs without any external network connection by using the created migration repo as the base image for the build.
```typescript
const migrateRole = new iam.Role(this, `${props?.prefix}MigrateRole`, {
  assumedBy: new iam.ServicePrincipal('codebuild.amazonaws.com'),
});

const migrateBuildProject = new codebuild.PipelineProject(this, `${props?.prefix}MigrateBuild`, {
  // This build will call prisma's migration function, and output it to `output.txt` in the artifact.
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      pre_build: {
        commands: [
          'cd /app'
        ]
      },
      build: {
        commands: [
          'echo $CODEBUILD_SRC_DIR > codebuildsrcdir.txt',
          'pwd > whereami.txt',
          'ls > whatsaroundme.txt',
          './node_modules/.bin/prisma migrate up --experimental > output.txt 2>&1',
        ]
      }
    },
    artifacts: {
      files: [
        '/app/codebuildsrcdir.txt',
        '/app/whereami.txt',
        '/app/whatsaroundme.txt',
        '/app/output.txt'
      ]
    }
  }),
  environment: {
    buildImage: codebuild.LinuxBuildImage.fromEcrRepository(migrationRepo),
    privileged: true,
    environmentVariables: {
      DB_STRING: {
        value: `${secret.secretName}:url`,
        type: codebuild.BuildEnvironmentVariableType.SECRETS_MANAGER
      },
    }
  },
  vpc: props?.vpc,
  subnetSelection: {
    subnetType: ec2.SubnetType.ISOLATED
  },
  role: migrateRole,
  securityGroups: props?.rdsSecurityGroup ? [props?.rdsSecurityGroup] : undefined
});

props?.db.grantConnect(migrateRole);

migrateStage.addActions(new codepipeline_actions.CodeBuildAction({
  actionName: `${props?.prefix}MigrateBuild`,
  input: sourceArtifact,
  outputs: [migrateArtifact],
  project: migrateBuildProject
}));
```
👍 1
This way, we can apply Prisma migrations from within the same subnet as the database, using the migration image built earlier
👍 1
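For context, the `migrateBuilder` target referenced by the `docker build --target migrateBuilder` command above could look roughly like this. This is a hypothetical sketch (the actual Dockerfile isn't shown in this thread); the idea is to bake prisma and the migration files into a stage while npm is still reachable, so the isolated build needs no network:

```dockerfile
# Hypothetical multi-stage Dockerfile for ./api (illustrative only).
FROM node:14 AS migrateBuilder
WORKDIR /app
COPY package*.json ./
RUN npm ci                 # install prisma into node_modules while npm is reachable
COPY prisma ./prisma       # schema + migration files

# ...later stages build the actual API image on top of or alongside this one.
```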
j
I assume you update your API servers with the new API image after the migration completes - do you write your migrations in a way that both the old and new API code can use it during and after the migration?
j
Thanks Jonathan
j
@Jonathan Romano the API server does not use the migrations; those are only meant for the database (RDS). The API does use the latest generated Prisma client, but that is already pushed to git and is independent from our migration or API containers
j
@Jonathan Right, what I mean is that your schema will be different before and after you run migrations, so you either need to take all your servers offline while you do the migration, or both the old code and the new code need to be compatible with both schemas if you want to do hot replacement