I use something similar to the first option.
Instead of using managed Postgres, though, I use the docker-compose file provided in the main supabase repo (rough setup flow sketched after this list). That includes Postgres and all the other parts needed to run the stack (REST, realtime, auth, etc.). This has two main advantages:
- Lower cost. You're just paying for a standard VPS rather than a VPS + managed DB.
- Easier deployment. Since you're essentially spinning up a set of Docker containers, you have the full power of the Docker ecosystem available (e.g. Portainer for management). DO also has a premade droplet image with Docker preinstalled, which saves a lot of time and stress.
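For reference, the flow is roughly this, going from my memory of the Supabase self-hosting docs (double-check the current docs, since paths and env keys change over time):

```bash
# Grab the compose file and friends from the main repo
git clone --depth 1 https://github.com/supabase/supabase
cd supabase/docker

# Copy the example env and fill in your own secrets (JWT secret, passwords, etc.)
cp .env.example .env

# Bring up Postgres plus the rest of the stack (REST, realtime, auth, ...)
docker compose up -d
```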
I believe the Supabase Postgres image bundles extensions and roles that stock Postgres doesn't have, so it's possible a DO managed DB wouldn't work cleanly with the other parts of the Supabase stack. I could be wrong about this, so don't take my word for it.
As for the API, I'd personally run that as a Docker container too. PM2 is good, but I always found deployments a pain compared to having Watchtower just pull a new image when one is available and instantly restart the container on it.
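A minimal sketch of that setup, assuming your API image lives at `ghcr.io/you/api` (hypothetical name, swap in your own):

```yaml
# docker-compose.yml (sketch): the API container plus Watchtower to auto-update it
services:
  api:
    image: ghcr.io/you/api:latest   # hypothetical image name
    restart: unless-stopped
    ports:
      - "3000:3000"

  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      # Watchtower talks to the Docker daemon to pull new images and restart containers
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 300   # poll the registry every 5 minutes
```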
Running an API directly on a VPS means you have to keep the runtime installed on the box (Node, Ruby, PHP, etc.).
For example, package updates sometimes require a newer Node version. If I were deploying directly on the VPS, I'd have to go through installing the newer Node version first, then redeploy.
By putting it inside Docker, you only need Docker installed, and changing something like a Node version is just a case of changing a single line in your Dockerfile. Your CI (e.g. GitHub Actions) handles pulling the correct Node version, so any issues generally show up before the deploy happens.
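To make that concrete, here's a minimal Dockerfile sketch for a Node API (the entrypoint file is hypothetical; use whatever your app's start file is):

```dockerfile
# Bumping Node is just editing this one line (e.g. node:18-alpine -> node:20-alpine)
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

EXPOSE 3000
CMD ["node", "server.js"]   # hypothetical entrypoint
```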