Sebastian Gug
08/22/2022, 9:40 PM
Gustavo
08/22/2022, 10:02 PM
app-cluster.config.js
module.exports = {
apps: [
{
name: 'migrator',
script: './migrationManager/index.js',
instances: 1,
},
{
name: 'app_1',
script: './index.js',
exec_mode: 'cluster',
      instances: 'max', // leave this as 'max' so it can take advantage of all available computing power.
},
],
};
then my container simply does:
Dockerfile
CMD ["yarn", "start:prod"]
package.json
"start:prod": "pm2-runtime app-cluster.config.js"
That will start 2 processes:
⢠1 for an API
⢠1 for running migrations
The migrations are executed in another child_process (using spawnSync), and once everything has finished, I use the pm2 API to connect to the running pm2 instance and stop the migrator process, since I only need it to run the migrations and nothing else...
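For illustration, here's a minimal sketch of what the migrator script could look like. The yarn run-migrations command is an assumption; the 'migrator' process name comes from the config above.

migrationManager/index.js (sketch)
const { spawnSync } = require('child_process');
const pm2 = require('pm2');

// Run the migrations synchronously and wait for them to finish.
// "run-migrations" is a placeholder for whatever command actually runs them.
const result = spawnSync('yarn', ['run-migrations'], { stdio: 'inherit' });

if (result.status !== 0) {
  console.error('Migrations failed');
}

// Migrations are done: connect to the running pm2 daemon and stop this
// process, since it has nothing else to do.
pm2.connect((err) => {
  if (err) {
    console.error(err);
    process.exit(1);
  }
  pm2.stop('migrator', (stopErr) => {
    pm2.disconnect();
    if (stopErr) console.error(stopErr);
  });
});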
A better way to do this, though, would be to not run the migrations from the script at all: instead, have the script push each migration into a queue as a job (Redis or whatever), and then have a second version of your API (or a separate worker) constantly check the queue for jobs and execute/consume them.
This second approach is better for scaling. The first one, though, helps you validate that everything is working up to that point.
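As a rough sketch of that queue-based variant: this assumes ioredis, a hypothetical "migrations" Redis list as the queue, and a placeholder runMigration helper standing in for whatever actually runs a migration.

queue-sketch.js
const Redis = require('ioredis');
const redis = new Redis();

// Producer: instead of running the migration, push it onto the queue as a job.
async function enqueueMigration(name) {
  await redis.lpush('migrations', JSON.stringify({ name, queuedAt: Date.now() }));
}

// Placeholder for whatever actually runs a single migration (assumption).
async function runMigration(name) {
  console.log(`running migration ${name}`);
}

// Consumer: a second version of the API (or a separate worker) that keeps
// checking the queue and executes jobs as they arrive.
async function consume() {
  for (;;) {
    // BRPOP blocks for up to 5 seconds waiting for a job, then loops again.
    const entry = await redis.brpop('migrations', 5);
    if (!entry) continue;
    const job = JSON.parse(entry[1]);
    await runMigration(job.name);
  }
}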
But if there's a better way, I'm all ears!