IhateAI 3 days ago

Why would someone want to spend half a million dollars on GPUs and components (if not more) to run year-old models that genuinely aren't useful? You can't self-host trillion-parameter models unless you own a datacenter lol (or want to just light money on fire).
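The "you need a datacenter" claim comes down to simple weight-memory arithmetic. A back-of-the-envelope sketch (the parameter count and byte-per-weight figures are illustrative assumptions, not specs of any particular model):

```python
# Rough VRAM needed just to hold model weights at a given precision.
# Ignores KV cache, activations, and serving overhead, which only add to the bill.

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Memory for the weights alone, in GB."""
    return params * bytes_per_param / 1e9

ONE_TRILLION = 1e12  # hypothetical 1T-parameter model

for label, width in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label}: {weight_memory_gb(ONE_TRILLION, width):,.0f} GB")
# fp16 → 2,000 GB; even aggressive int4 quantization still needs ~500 GB
```

At 2 TB of VRAM for fp16 weights, you're into multi-node territory before you've served a single token, which is the point the comment is making.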

  • tom1337 3 days ago

    Are the mini / omni models really trillion-parameter models?

    • IhateAI 3 days ago

      I don't think so, but you're still looking at a giant investment that can't really be justified by their capabilities.

WorldPeas 3 days ago

They'd only do that if they were some kind of open ai company /s

  • tgtweak 3 days ago

    gpt-oss is pretty great tbh - one of the better all-around local models for knowledge and grounding.

    • ComputerGuru 2 days ago

      Everyone keeps saying that but I’ve found it to be incredibly weak in the real world every single time I’ve reached for it. I think it’s benchmaxxed to an extent.