Comment by rockemsockem 2 days ago
What are you talking about?
When people talk about scaling requirements they are not referring to minutiae like "this function needs X CPU per request and that function needs Y", they mean that particular endpoints are constrained by different resources (CPU vs memory vs disk). This matters because of co-hosting: say one endpoint is CPU-bound and needs X CPU, while another endpoint in the same service needs Y memory, where Y is much larger than the Z memory the CPU-bound endpoint would need on its own. Now every machine you add to scale the CPU-bound endpoint also has to carry that large memory footprint, so you pay a bunch of extra money just because the two are co-hosted.
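To make the cost argument concrete, here's a rough sketch with entirely made-up numbers (the instance sizes, prices, working-set size, and the assumption that the memory-bound endpoint loads its working set into every replica are all hypothetical): once every replica must hold a large memory footprint, the whole CPU-bound fleet gets forced onto expensive high-memory machines.

```python
import math

# Hypothetical instance menu: (name, vCPU, GiB RAM, $/hr) -- made-up prices.
SMALL = ("c.small", 4, 8, 0.15)    # cheap, CPU-oriented box
BIG   = ("r.large", 4, 32, 0.40)   # pricey, memory-oriented box

cpu_needed = 40      # total vCPU the CPU-bound endpoint needs at peak
mem_footprint = 20   # GiB working set the memory-bound endpoint loads
                     # into *every* process that serves it (hypothetical)

def fleet_cost(inst, cpu_total, mem_per_replica):
    """Hourly cost of a fleet sized by CPU demand, if replicas fit in RAM."""
    name, vcpu, ram, price = inst
    if ram < mem_per_replica:
        return None                      # a replica doesn't fit at all
    n = math.ceil(cpu_total / vcpu)      # replica count driven by CPU
    return n * price

# Split services: the CPU-bound endpoint runs alone on small machines,
# and the memory-bound endpoint gets one big box of its own.
split_cpu_tier = fleet_cost(SMALL, cpu_needed, 0.1)   # tiny footprint
split_mem_tier = BIG[3] * 1
split_total = split_cpu_tier + split_mem_tier

# Co-hosted: every replica also carries the 20 GiB working set, so the
# entire CPU fleet must run on the big machines.
cohosted_total = fleet_cost(BIG, cpu_needed, mem_footprint)

print(f"split: ${split_total:.2f}/hr, co-hosted: ${cohosted_total:.2f}/hr")
```

With these numbers the split layout comes to $1.90/hr versus $4.00/hr co-hosted: same CPU capacity, roughly double the bill, purely because the memory-bound endpoint rides along on every replica.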
If all your endpoints just run some different logic, make a few Redis calls, run a few Postgres queries, and assemble results, then keep them all together!!!
EDIT: my original post even included the phrase "significantly different" to describe when you should split a service!!! It's like you decided to have an argument with someone else entirely, but directed it at me.