Comment by bongodongobob 9 days ago
Nice, sounds like it saved you some time.
So THAT's why large organizations want "AI".
In such a place I should be a very loud advocate of LLMs, use them to generate 100% of my output for new tasks...
... and then "improve performance" by simply fixing all the obvious inefficiencies and brag about the 400% speedups.
Hmm. Next step: instruct the "AI" to use bubblesort.
> What if I had trusted the code? It was working after all.
Then you would have been done five minutes earlier? I mean, this sort of reads like a parody of micro-optimization.
Why would you blindly trust any code? Did you tell it to optimize for speed? If not, why are you surprised it didn't?
So, most low-level functions that enumerate the files in a directory return a structure containing each file's metadata, including its size. You already have it in memory.
Your brilliant AI calls another low-level function to look up the file size by name. (It also did worse stuff, but let's not go into details.)
Do you call reading the file size from the in-memory structure you already have a speed optimization? I call it common sense.
Yep, exactly. LLMs blunder over the most simple nonsense and just leave a mess in their wake. This isn't a mistake you could make if you actually understood what the library is doing / returning.
It's so funny how these AI bros make excuse after excuse for glaring issues rather than just accept that AI doesn't actually understand what it's doing (never mind that it's faster to just write good-quality code on the first try).
>Why would you blindly trust any code?
Because that's what the market is trying to sell?
You "AI" enthusiasts always try to find a positive spin :)
What if I had trusted the code? It was working after all.
I'm guessing that if I asked it for string manipulation code, it would have done something worth posting on Accidentally Quadratic.
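For readers unfamiliar with the reference: "accidentally quadratic" string code usually means building a string by repeated concatenation. A hypothetical Python sketch of the pattern, since the thread names no actual code:

```python
def join_quadratic(parts):
    """Each concatenation may copy the entire accumulated string,
    so n appends can cost O(n^2) character copies in total."""
    out = ""
    for p in parts:
        out = out + p  # copies everything built so far
    return out

def join_linear(parts):
    """str.join makes a single pass: O(total length)."""
    return "".join(parts)
```

Both produce identical output; the difference only shows up as runtime once the input gets large.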