Comment by extragalaxial 6 days ago
[flagged]
It's getting down-voted because it is very bad advice, one that can be refuted by already-known facts. Your comment is even worse in this regard and is very misleading - LLMs are definitely not going to "accurately explain everything you need to know". They are not a magical tool that "knows everything"; they are statistical parrots that infer the most likely sequence of tokens, which results in inaccurate responses often enough. There are already a lot of incompetent folks relying blindly on these unreliable tools, so please do not introduce more AI-slop-based thinking into the world ;)
You left out the "for common algorithms like this" part of my comment. None of what you said applies to learning simple, well-established algorithms for software development. If it's history, biology, economics etc. then sure, be wary of LLM inaccuracies, but an algorithm is not something you can get wrong.
I don't personally know much about DHTs so I'll just use sorting as an example:
If an LLM explains how a sorting algorithm works, and it explains why it fulfills certain properties about time complexity, stability, parallelizability, etc., and backs those claims up with example code and mathematical derivations, then you can verify that you understand it by working through the logic yourself and implementing the code. If the LLM made a mistake in its explanation, then you won't be able to understand it, because it can't possibly make sense; the logic won't work out.
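To make that concrete, here's a rough sketch of what I mean (my own Python, not anything an LLM produced): a stable merge sort where the claimed properties aren't taken on faith but checked, by reasoning through the comments and by comparing the output against Python's built-in sorted(), which is documented to be stable.

    import random

    def merge_sort(items, key=lambda x: x):
        # O(n log n): the list is halved ~log n times and each level does O(n) merging.
        if len(items) <= 1:
            return list(items)
        mid = len(items) // 2
        return merge(merge_sort(items[:mid], key), merge_sort(items[mid:], key), key)

    def merge(left, right, key):
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            # "<=" takes the left element on ties, preserving original order:
            # that is exactly the stability property tested below.
            if key(left[i]) <= key(right[j]):
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    # Don't trust the explanation, check it: Python's sorted() is stable,
    # so a correct stable merge sort must agree with it on tied keys.
    data = [(random.randrange(10), n) for n in range(1000)]
    assert merge_sort(data, key=lambda p: p[0]) == sorted(data, key=lambda p: p[0])

If the explanation had been wrong about stability or the merge step, the assertion (or the attempt to follow the comments) would expose it.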
Also please don't perpetuate the statistical parrot interpretation of LLMs, that's not how they really work.
I meant it also for the (unwittingly) left-out part of your comment. Firstly, by saying this parrot will explain "everything that you need to know ..." you're pushing your own standards onto everyone else. Maybe the OP really wants to understand it deeply, learn about edge cases, and understand how it really works. I don't think I would rely on a statistical parrot (yes, that's really how they work, only on a large scale) to teach me stuff like that. At best, they are to be used with guardrails as some kind of personal version of "rain man", with the exception that the "rain man" was not hallucinating when counting cards :)
> Also please don't perpetuate the statistical parrot interpretation of LLMs, that's not how they really work.
I'm pretty sure that's exactly how they work.
Depending on the quality of the LLM and the complexity of the thing you're asking about, good luck fact-checking its output. It is about the same effort as finding direct sources and verified documentation or resources written by humans.
LLMs generate human-like answers by applying statistics and other techniques to a huge corpus. They do hallucinate, but what is less obvious is that a "correct" LLM output is still a hallucination. It just happens to be a slightly useful hallucination that isn't full of BS.
As the LLM takes in inconsistent input and always produces inconsistent output, you *will* have to fact-check everything it says. That makes it useless for automated reasoning or explanations, and a shiny turd in most respects.
The useful things LLMs are reported to do were an emergent effect found by accident by natural-language engineers trying to build chat bots. LLMs are not sentient and have no idea whether their output is good or bad.
Exactly this. The thing that irritates and worries me is that I notice a lot of junior folks trying to apply these machines to open-ended problems the machines don't have the context for. The lawsuits with made-up case citations are just the beginning, I'm afraid; we're in for a lot more slop endangering our services and tools.
Please, please avoid recommending LLMs for problems where the user cannot reliably verify their outputs. These tools are still not reliable (and given how they work, they may never be 100% reliable). It's likely the OP could get a "summary" which contains hallucinations or incorrect statements. It's one thing when experienced developers use Copilot or similar to avoid writing boilerplate and the boring parts of the code - they still have the competence to review, control, and adapt the outputs. But for someone looking to get introduced to a hard topic, such as the OP, it's very bad advice, as they have no means of checking the output for correctness. A lot of us already have to deal with junior folks spitting out AI slop on a daily basis, probably using the tools the way you suggested. Please don't introduce more AI-slop nonsense into the world.