Comment by solatic 17 hours ago

Reliable systems require hard limits imposed by their designers. When a system hits one of those limits, it's a sign that somebody's assumptions were wrong: either the designer built too small, or some bug is pushing up against the limit. Either way, you catch the bug or make an intentional decision about how to scale further. This is basic engineering and a requisite part of any undergraduate engineering degree worth its salt.

Allowing eight-hundred-gigabyte containers is gross incompetence. Trying to fix it by scaling the node disk from 2 TB to 2.5 TB is further evidence of incompetence. Recognizing that you need a hard cap, but not concluding with action items to actually build one (instead just adding monitoring for image size), is a clear sign to stay away.
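
To put a finer point on it: a hard cap is not exotic engineering. One rough sketch, a CI or admission gate that pulls the image manifest from the registry, sums the layer sizes, and refuses anything over a budget. The registry URL, repository name, and size budget below are made up, and a real registry would also need authentication:

```python
import requests

REGISTRY = "https://registry.example.com"   # hypothetical registry
MAX_IMAGE_BYTES = 20 * 1024**3              # illustrative 20 GiB budget

def image_size(repo: str, tag: str) -> int:
    """Sum compressed layer sizes from the Docker Registry v2 manifest."""
    resp = requests.get(
        f"{REGISTRY}/v2/{repo}/manifests/{tag}",
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
        timeout=10,
    )
    resp.raise_for_status()
    manifest = resp.json()
    return sum(layer["size"] for layer in manifest["layers"])

def enforce_cap(repo: str, tag: str) -> None:
    """Fail the pipeline if the image exceeds the budget."""
    size = image_size(repo, tag)
    if size > MAX_IMAGE_BYTES:
        raise SystemExit(
            f"{repo}:{tag} is {size / 1024**3:.1f} GiB, "
            f"over the {MAX_IMAGE_BYTES / 1024**3:.0f} GiB cap"
        )

if __name__ == "__main__":
    enforce_cap("team/service", "latest")   # hypothetical image
```

That's maybe thirty lines and it turns "we noticed the image was huge" into "the huge image never shipped."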

It boggles my mind that the author can understand copy-on-write filesystem semantics but can't imagine how to engineer an actual size limit on that filesystem. How is that possible?
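
The runtime knob already exists, too: with a storage driver that supports per-container quotas (overlay2 backed by xfs mounted with pquota, or devicemapper), the writable copy-on-write layer can be capped per container. A rough sketch with the Docker SDK for Python, where the image name and the 10 GiB limit are illustrative:

```python
import docker  # Docker SDK for Python (docker-py)

client = docker.from_env()

# Cap the container's writable layer at 10 GiB. Requires a storage
# driver with per-container quota support (e.g. overlay2 on xfs
# mounted with pquota, or devicemapper).
container = client.containers.run(
    "team/service:latest",          # hypothetical image
    detach=True,
    storage_opt={"size": "10G"},    # hard cap on the copy-on-write layer
)
print(container.id)
```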

... oh right, the blog post is LLM slop. So nobody knows what the author actually learned.