There's a saying that you should first focus on building a great product that might bring in a million users, and only worry about scaling afterwards, when you actually need to. You can often make an otherwise slow product feel much faster through clever use of animations and interesting loading indicators.
That's not to say you shouldn't invest in a sensible architecture at the start, just that you shouldn't worry too much about scalability before it becomes necessary.
50-100K queries/s is a lot for a single server to handle, I'd argue too much, and I'd almost guarantee you'd need read replicas and load balancing. This page (https://severalnines.com/resources/database-management-tutorials/postgresql-load-balancing-haproxy) covers using HAProxy for that, while https://www.postgresql.org/docs/current/high-availability.html has in-depth resources on replication and high availability.
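To give a rough idea of the shape of that setup, here's a minimal HAProxy sketch for spreading read traffic across replicas - the addresses, port and server names are placeholders I made up, not anything from the linked guides:

```
# Round-robin TCP load balancing across two hypothetical Postgres read replicas.
# "check" here is only a TCP-level health check; real setups usually add a
# proper Postgres-aware check and a separate frontend for the primary (writes).
listen postgres_read
    bind *:5000
    mode tcp
    balance roundrobin
    server replica1 10.0.0.11:5432 check
    server replica2 10.0.0.12:5432 check
```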
It also depends on the load: 50K `SELECT`s against a single table are going to be easier to handle than 50K `SELECT`s with `JOIN`s, `WHERE` clauses and other filters in place - in the same way that returning a static `hello world` is quicker than returning a computed response. Creating partitions and indexes definitely helps in those cases, and Postgres keeps recently used data pages in its shared buffers (and can reuse plans for prepared statements), so subsequent requests running the same query already know where to look and hit warm data.