Comment by dr_dshiv
Good old fashioned AI, amirite
I’ve never liked that term “sub-symbolic”. It implies that there is something at a deeper level than what a Turing machine can compute (i.e., via the manipulation of strings of symbols), and as far as we can tell, there’s no evidence for that. It might be true, but even a quantum computer can be simulated on a classical computer. And of course neural networks run on classical computers too.
Yeah, I know that’s not what “symbol” is really referring to in this context, but I just don’t like what the semantics of the word suggests about neural networks: that they are somehow a halting oracle or capable of hypercomputation, which they’re obviously not.
Oh for sure! Wasn’t critiquing your comment at all. I’ve seen the term a lot lately and it just made me wonder how much the industry is using it as a misleading hype factor. E.g., claiming LLMs are “better” than Turing machines because they operate at a level “below” Turing machines, even though the comparison doesn’t make sense: symbolic computation isn’t referring to the symbol-manipulating nature of Turing machines in the first place.
Well, to the extent that people equate GOFAI with purely symbolic / logic-based processing, then no, not for my money anyway. I think it's possible to construct systems that use elements of symbolic processing along with sub-symbolic approaches and get useful results. I think of it (though this is something of an over-simplification) as taking symbolic reasoning, relaxing some of the constraints that come with the guarantees that method makes about its outputs, and accepting a (hopefully only slightly) less desirable output. Or flip the whole thing around: get an output from, say, an LLM, where there might be hallucinations, and then use a symbolic reasoning system to post-process that output and check its veracity before sending it to the user. Amazon has done some work along those lines, for example: https://aws.amazon.com/blogs/machine-learning/reducing-hallu...
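To make that shape concrete, here's a toy Python sketch of the "generate, then symbolically verify" idea. Everything in it is a made-up placeholder (the `Claim` triple, the fact base, the `call_llm` callable); it's not the approach from the linked Amazon post, just the general pattern of filtering a model's output against an explicit, checkable representation before it reaches the user.

```python
# Toy sketch: sub-symbolic generation followed by symbolic verification.
# `call_llm`, `Claim`, and the fact-base representation are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable, List, Set, Tuple


@dataclass
class Claim:
    subject: str
    predicate: str
    obj: str


def verify(claims: List[Claim], facts: Set[Tuple[str, str, str]]) -> List[Claim]:
    """Keep only claims that can be grounded in an explicit fact base."""
    return [c for c in claims if (c.subject, c.predicate, c.obj) in facts]


def answer(question: str,
           call_llm: Callable[[str], List[Claim]],
           facts: Set[Tuple[str, str, str]]) -> str:
    # 1. Sub-symbolic step: the LLM proposes candidate claims (may hallucinate).
    candidates = call_llm(question)
    # 2. Symbolic step: filter against the fact base before anything reaches the user.
    grounded = verify(candidates, facts)
    if not grounded:
        return "No verifiable answer."
    return "; ".join(f"{c.subject} {c.predicate} {c.obj}" for c in grounded)


# Example with a stub "LLM" that returns one true and one hallucinated claim.
kb = {("Paris", "is capital of", "France")}
stub = lambda q: [Claim("Paris", "is capital of", "France"),
                  Claim("Paris", "is capital of", "Germany")]
print(answer("What is Paris the capital of?", stub, kb))  # -> Paris is capital of France
```

The verification step here is just set membership, standing in for whatever logic-based checker (entailment over a knowledge graph, a theorem prover, etc.) a real system would use.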
Anyway this is all somewhat speculative, and I don't want to overstate the "weight" of anything I seem to be claiming here. This is just the direction my interests and inclinations have taken me in.