Comment by YeGoblynQueenne a day ago
The wall of text is fine, but I am not convinced. If you really have a real-life, practical problem with cuts, then your comment fails to articulate it, because I couldn't tell what it is meant to be. If it is a matter of aesthetics - higher-order languages look prettier and we could, in theory (but not in practice), parallelise them - then I don't care about that. Aesthetics has no place in pragmatic considerations like the amount of resources you really need to be able to use a language. Case in point:
>> We don't have a good strategy for automating this in a compiler yet, but the capability is there
And it's been "there" since the 1960s, but it's still not here. Looks pretty on paper, doesn't work on metal. Close, but no cigar.
>> They both exist because a Prolog implementation is intimately tied to a specific search strategy.
That is not right. There are different execution strategies for Prolog, other than DFS. In fact, Datalog's bottom-up evaluation with a TP operator is one of them: you can apply it to Prolog programs just fine, except without the nice termination guarantees; but then, Church-Turing thesis.
Another execution strategy is SLG-Resolution with tabulation, a.k.a. tabling, which I mentioned in my other comment. I copy the link to SWI-Prolog's implementation here again:
https://www.swi-prolog.org/pldoc/man?section=tabling
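To make that concrete, here is the textbook case tabling handles: left recursion over a cyclic graph. A minimal sketch in SWI-Prolog (the edge/2 facts are made up for illustration):

    % Without the table directive, ?- path(a, X). keeps re-deriving
    % answers around the cycle and never terminates. With it, each
    % answer is returned once and the query terminates.
    :- table path/2.

    path(X, Y) :- edge(X, Y).
    path(X, Y) :- path(X, Z), edge(Z, Y).

    edge(a, b).
    edge(b, c).
    edge(c, a).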
Tabling is as old as nails in the Prolog community [1]. Besides which, there is a ton of computer science knowledge you can bring to bear on the problem of avoiding non-termination, or unnecessary backtracking. If that is, indeed, the problem. Like I say, I don't understand from your comment what your real-life, real-world, practical problem with using cuts actually is.
>> Prolog didn't relinquish enough control. We can't express certain problems without resorting to weird dances to appease the poor search algorithm, same as how I might have to contort the way I write a C program so the compiler can vectorize the code.
I don't understand what that means. Prolog is FOL running on a digital computer. One is a computational paradigm, the other is a computational device. There is a mismatch between the two, unfortunately and inevitably, because of the difference between theory and practice etc etc, and there are concessions that had to be made to be able to run one on top of the other. Messrs Kowalski and Colmerauer (the man whose name not even French computer scientists can pronounce - true story) alighted on a set of choices that had the advantage of being the most practical possible, for the purpose of having a real language, that one can write real code in, to run on real computers; unlike, say, Haskell.
And you can do a lot more than just "write code". Prolog, because its interpreter is an implementation of SLD-Resolution, is not just a programming language but a sound and (refutation-) complete deductive inference system for FOL. And with recent advances in Inductive Logic Programming, it turns out it's also a sound and refutation-complete inductive inference system.
Another couple of points:
DFS is not a poor choice of a search algorithm. It's an efficient one. You say ASP is "massively more efficient" than Prolog. It's not. ASP interpreters must first solve an NP-complete problem, which they only do thanks to powerful, special-purpose solvers. Those solvers are fast [2], but efficiency is a whole other ball game (we don't have efficient algorithms for NP-complete problems). The NP-complete problem arises because, with ASP's bottom-up interpretation, the entire Herbrand base of a predicate must be fully grounded. That's why ASP also has no functions [3]; so, no lists. The big advantage of Resolution, its One Neat Trick as an algorithm, is exactly that it doesn't have to ground the entire Herbrand base before finding a proof, if one exists. That's thanks to unification. More to the point, as Resolution proceeds, it prunes the search space of proofs, just by discarding branches starting at goals that fail unification. ASP can't do that [4,5].
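A toy illustration of that last point: take the usual Peano-numbers program. The function symbol s/1 makes its Herbrand base infinite, so it can never be fully grounded bottom-up, yet Resolution proves a ground query in a handful of unification steps:

    % Herbrand base: {nat(0), nat(s(0)), nat(s(s(0))), ...} - infinite,
    % so a grounder can never finish. Top-down Resolution answers the
    % query below in three steps, without grounding anything.
    nat(0).
    nat(s(N)) :- nat(N).

    % ?- nat(s(s(0))).
    % true.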
See, every time anyone has tried to "fix" Prolog's not-fully-100%-declarativeness, they've just broken it.
Or, as I like to say about logic programming in general; sound, complete, efficient: choose two.
_______________
[1] OLD-Resolution with Tabulation; Tamaki and Sato, 1986. https://dl.acm.org/doi/10.5555/12069.12075
[2] And cool.
[3] Because functions turn a predicate's Herbrand base infinite, and then it takes an infinite amount of time to ground it.
[4] Well, ish. Standard, bottom-up ASP can't but there are different ways to evaluate ASP ... programs, too. For instance, you can evaluate them top-down, with unification, just like in Prolog. See sCASP (first link I could find; dig deeper):
http://platon.etsii.urjc.es/~jarias/slides/gde21-tutorial.pd...
[5] It's not just unification pruning the search space of proofs, either. It's that the cardinality of the resolution closure of a set of Horn clauses decreases monotonically until either the empty clause is constructed by resolution, or... well, infinity.
Sure, I like to think of the more philosophical aspects but I understand they're not something people necessarily care about. If you prefer just the practical side of things then that's a lot easier to explain: I didn't like working with Prolog.
So what follows is purely anecdotal of course, but it can't be anything else really. At my university we were exposed to all four major paradigms: Procedural (C), OOP (Java), Functional (Haskell) and Logic (Prolog).
When I first got exposed to Prolog, I was amazed. "Wow, you can just write the rules and it figures out the solution!". That didn't last long. It was slow and clunky, I had to be careful how I wrote the rules, and debugging was an exercise in frustration. Sometimes it ran forever, sometimes it gave repeated answers, sometimes it gave all the right answers and then ran forever.
Cuts were introduced as the solution to the performance issues, and to me they were witchcraft more than they were engineering: something you threw around at random in the hopes of making it work. I tried to go all in with it and make the whole class project as much as possible in Prolog (it was a Java/Prolog mix).
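To give a concrete flavour of the kind of thing that tripped me up, here's the classic max/3-with-a-cut example (reconstructed from textbook lore, not my actual coursework):

    % The cut commits to the first clause when X >= Y, so the second
    % clause skips its own test. It works for the intended mode, but the
    % predicate is no longer declarative: the second query below succeeds
    % with a wrong answer, because the first clause's head fails to unify
    % and the "default" clause fires anyway.
    max(X, Y, X) :- X >= Y, !.
    max(_, Y, Y).

    % ?- max(5, 1, M).   M = 5, as expected.
    % ?- max(5, 1, 1).   succeeds, even though 1 is not the maximum.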
My professor actually ended up giving me a bad grade because that's not how he wanted Prolog to be used. He wanted us to use it more like one might use Datalog (with good reason, but Datalog was never explicitly mentioned).
At the time I didn't get it, the whole paradigm just felt like a complete joke to me. Something researchers liked to play with but unfit for any real work. That was also the perception of the vast majority of my colleagues.
Same was true of functional programming by the way, though to a lesser extent. I (and others) could definitely see the benefits of referential transparency, higher-order functions and lazy evaluation. But I'd just apply them in whatever language I was already using. Same as how, if I wanted to do search, I'd just write a depth-first search algorithm myself, no reason to bother with Prolog.
All examples were either trivial nonsense, ridiculous contortions of the paradigm (a window manager? really?), or "irrelevant" research (research about Prolog using Prolog to do more Prolog).
But the problem is that my perception of Prolog extended to my perception of logic programming in general. I saw them as one and the same. The "best" and "most famous" logic programming language is jank, so the whole paradigm must be jank.
Only after I was later exposed to SMT solvers did it finally "click" that different programming paradigms can be truly transformative. That they can provide truly massive productivity boosts. Z3 added a Datalog mode at some point, which was when I realized my perception of the logic paradigm was clouded.
From Z3 Datalog I moved on to Souffle Datalog and Datomic (that syntax though, ugh), where I got to experience how poor SQL truly is. I already knew it was a poor showing of the relational paradigm, but I didn't realize the relational paradigm itself was so much weaker than the real deal: logic programming.
So that convinced me logic programming had value. Real, undeniable, practical value, not "parlor tricks". But that was still "the non-Turing-complete fragment of logic programming is useful", not "logic programming is useful". Because full logic programming still meant "Prolog" to me, the slow, janky language I used in university.
But then I got to experience Mercury and Curry. Full, Turing-complete logic programming languages with massively better performance than Prolog, which don't need cuts to perform well. Mercury has its determinism modes, which make debugging the behavior of rules so much easier (much as how static typing in general makes everything easier to debug).
Curry's "needed narrowing" semantics also means if there is a solution, it will find it. This is true even though Curry is also a Turing-complete logic programming language. Prolog can get stuck in a branch of the search tree that cannot lead to a solution.
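A contrived sketch of my own to show what I mean by getting stuck:

    % A solution exists (p(a)), but Prolog's depth-first search commits to
    % the first clause, descends through f(f(f(...))) forever, and never
    % reaches the fact. A fair (e.g. breadth-first) search would still
    % find the answer.
    p(f(X)) :- p(X).
    p(a).

    % ?- p(Y).   loops forever; never reports Y = a.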
Mercury and Curry are both strong evidence to me that "every time anyone has tried to fix Prolog's not-fully-100%-declarativeness, they've just broken it" is not really accurate.
Can you give me a serious example that would not be expressible in either? Another commenter mentioned meta-interpreters but while they're incredibly cool, I'd still classify them more as a great research tool than a practical one. How often do you need one?