Comment by kgeist
Sure, PHP can process logs of any volume, but it would require 5–10 times more servers to handle the same workload as something like Go. Not to mention that Go just works out of the box, while for PHP you must set up all those additional daemons you listed and make sure they keep working -- more machinery to maintain, and usually with quite a few footguns of its own. Recently, for example, our website went down at just 60 RPS because of a bad interaction between PHP-FPM (specifically its max worker count settings) and Symfony's session file locks. For Go on a similar machine, 60 RPS is nothing, but PHP can barely handle it unless you're a guru of process manager settings.
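For the curious, the footgun looks roughly like this: PHP's default file-based session handler (which Symfony uses out of the box) takes an exclusive lock at session_start() and holds it for the whole request. A minimal sketch of the interaction and the usual mitigation, assuming file-based sessions:

```php
<?php
// The default "files" session handler locks the session file at
// session_start() and holds the lock until the script exits. A second
// request with the same session cookie blocks on that lock, tying up
// another PHP-FPM worker; with a slow endpoint and a low pm.max_children,
// the whole pool can drain at very modest RPS.
session_start();

$userId = $_SESSION['user_id'] ?? null; // read what you need up front

// Release the lock as soon as you're done writing to the session, so
// concurrent requests from the same user aren't serialized.
session_write_close();

// ... the slow part (DB queries, API calls, rendering) now runs lock-free.
```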
In a different PHP project, we have a bunch of background jobs that process large amounts of data, and they routinely go OOM because PHP stores data in a very inefficient way compared to Go. In Go, it's trivial to load hundreds of thousands of objects into memory to quickly process them, but PHP already starts falling apart before we hit 100k. So we have to use smaller batches (= make more API calls), and the processing itself is much slower as well. And you can't easily parallelize without lots of complex tricks or additional daemons (which, again, you need to set up and maintain). It's just more effort, more wasted time, and more RAM/CPU for no particular gain.
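You can see the overhead for yourself in a few lines -- a quick sketch (exact numbers vary by PHP version and build, so treat the output as indicative):

```php
<?php
// Build 100k small objects and report peak memory. The per-object
// overhead (zval + object header + property table) adds up far faster
// than an equivalent slice of structs would in Go.
$items = [];
for ($i = 0; $i < 100000; $i++) {
    $o = new stdClass();
    $o->id    = $i;
    $o->name  = "item-$i";
    $o->price = $i * 0.01;
    $items[] = $o;
}
printf("peak memory: %.1f MB\n", memory_get_peak_usage(true) / 1048576);
```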
> In Go, it's trivial to load hundreds of thousands of objects into memory to quickly process them, but PHP already starts falling apart before we hit 100k.
I'm not going to argue that PHP is _better_ than Go. Just starting off with that.
But if your background jobs are going OOM when processing large amounts of data, it's likely there's a better way to do what you're trying to do. It is true that it's easy to be lazy with memory/resources in PHP because of the assumption that it'll be used in a throwaway fashion (serve request -> die -> serve request -> die), but it's also perfectly capable of running long-lived/daemonized processes without memory issues, and fairly trivially at that.
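For instance, a generator lets you stream the data instead of materializing the whole batch, keeping memory roughly constant no matter how many rows go through. A sketch -- fetchBatch() and process() are hypothetical stand-ins for your real data source and per-row work:

```php
<?php
// Stream rows through a generator so only one batch is ever in memory.
function rows(int $batchSize = 1000): \Generator
{
    $offset = 0;
    while (true) {
        // hypothetical helper: DB cursor, paginated API, file reader...
        $batch = fetchBatch($offset, $batchSize);
        if ($batch === []) {
            return;
        }
        foreach ($batch as $row) {
            yield $row;
        }
        $offset += $batchSize;
    }
}

foreach (rows() as $row) {
    process($row); // hypothetical per-row work; nothing accumulates between iterations
}
```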