# orm-help
s
any advice on how to run the migrations on the prod database when it's inside a VPC in GCP? I'd really like to avoid connecting to the VPC from GitHub Actions to run them; there must be a better way, but Google searches haven't turned up many meaty answers, so I thought I'd ask here
g
I am/was facing the same issue, but I'm on AWS with ECS Fargate. My backend is Node.js, so what I'm doing at the moment is using a process manager (pm2) with this config:
app-cluster.config.js
```js
module.exports = {
    apps: [
        {
            name: 'migrator',
            script: './migrationManager/index.js',
            instances: 1,
        },
        {
            name: 'app_1',
            script: './index.js',
            exec_mode: 'cluster',
            instances: 'max', // leave this so it can take advantage of all available computing power
        },
    ],
};
```
Then my container simply does:
Dockerfile
```dockerfile
CMD ["yarn", "start:prod"]
```
package.json
```json
"start:prod": "pm2-runtime app-cluster.config.js"
```
That will start 2 processes:
• one for the API
• one for running the migrations

The migrations are executed in a separate child_process running in the background via `spawnSync`, and after everything has finished I connect to the running pm2 instance through the pm2 API and stop the migrator process, since I only need it to run the migrations and nothing else...

A better way to do this, though, would be to have the script put the migrations in a queue as a job (could be Redis or whatever), and then have a second version of your API or script constantly check the queue for jobs and execute/consume them (see the sketches below). The second approach is better for scaling; the first one helps you validate that everything is working up to that point šŸ˜› But if there's a better way, I'm all ears! šŸ™‚
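For reference, a minimal sketch of what the migrator process could look like. This assumes Prisma as the migration tool (`npx prisma migrate deploy`) and the `migrator` process name from the config above; swap in whatever command your ORM uses:

```js
// migrationManager/index.js: a sketch, not my exact file.
const { spawnSync } = require('child_process');
const pm2 = require('pm2');

// Run the migrations synchronously; inheriting stdio sends the
// migration logs to the container output.
const result = spawnSync('npx', ['prisma', 'migrate', 'deploy'], {
    stdio: 'inherit',
});

if (result.status !== 0) {
    // Fail loudly so the orchestrator surfaces the broken deploy
    // instead of the API running against an unmigrated schema.
    process.exit(result.status || 1);
}

// Migrations are done: connect to the running pm2 daemon and stop
// this one-shot process, since it has nothing left to do.
pm2.connect((err) => {
    if (err) {
        console.error(err);
        process.exit(1);
    }
    pm2.stop('migrator', (stopErr) => {
        pm2.disconnect();
        if (stopErr) console.error(stopErr);
    });
});
```

And a rough sketch of the queue-based variant, assuming BullMQ on top of Redis (the queue name and connection details are placeholders):

```js
const { Queue, Worker } = require('bullmq');
const { spawnSync } = require('child_process');

const connection = { host: '127.0.0.1', port: 6379 }; // your Redis

// Producer side: the deploy step only enqueues a job instead of
// running migrations itself.
async function enqueueMigration() {
    const queue = new Queue('migrations', { connection });
    await queue.add('deploy', { requestedAt: Date.now() });
    await queue.close();
}

// Consumer side: a long-running worker (the "second version" of your
// API or script) picks up the job and runs the actual migration.
new Worker(
    'migrations',
    async () => {
        const result = spawnSync('npx', ['prisma', 'migrate', 'deploy'], {
            stdio: 'inherit',
        });
        if (result.status !== 0) throw new Error('migration failed');
    },
    { connection },
);
```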