It's perfectly fine to not like Prolog, but I do feel that if you're going to write an article about why you don't like it, you should at least spend some time figuring it out first.
He says of the cut operator "This is necessary for optimization but can lead to invalid programs." Imagine if a programmer new to C++ said the same thing of the "break" keyword. That's how ridiculous it sounds. Yes, cut can be used to prune backtracking and eliminate unneeded work, but that's hardly its purpose. It leads to "invalid" programs (by which I assume he means programs that do something other than what he wants) only in cases where you are using it wrong. Cut is no more "necessary for optimization" than break is. It's a control structure that he doesn't understand.
Negation (\+) is confusing, and the author correctly provides examples where its meaning is unintuitive when applied to unbound variables. That's because it's not strictly speaking a negation predicate, but rather a "not provable" predicate. In that light, the examples in the article make perfect sense. Yes, Prolog is a programming language, so the order of terms matters, even if the order wouldn't matter in pure logic.
Look, Prolog is a weird language. It has a learning curve. It's not "just another language" in the Java, C++, Pascal, Python mold. I get it. But this article has the flavor of an impatient newbie getting frustrated because he can't be bothered to read the documentation.
> this article has the flavor of an impatient newbie getting frustrated because he can't be bothered to read the documentation.
The author has written about Prolog in a positive light before (and linked to it in the post), so I don't get the impression that these are all cases of "the author doesn't understand what they're doing".
Their first complaint, that "strings are not standardised, so code working with strings in SWI-Prolog is not compatible with Scryer Prolog", seems an appropriate thing to be unhappy about (unless the author is just wrong?).
Your response to their gripe about \+ being "not provable" instead of "negation" notes it's a subtle difference, and that Prolog differs from pure logic there.
The author even notes that doing due diligence, they found a solution to a complaint they had. This doesn't strike me as "can't be bothered to read the documentation".
Open any Prolog programming textbook (Clocksin & Mellish, Bratko, Sterling & Shapiro, O'Keefe, anything) and the first thing you learn about Prolog is that "code logic" is expressed in facts and rules, and that Prolog predicates don't "return" anything.
The confusion only deepens after that. There are no boolean values? In an untyped language? Imagine that. true/0 and false/0 are not values? In a language where everything is a predicate? Imagine that. Complete lack of understanding that "=" is a unification operator, and that unification is not assignment. Like, really, it's not. It's not just a fancy way to pretend you don't do assignment while sneaking it in by the back door to be all smug and laugh at the noobs who aren't in the in-group. It's unification, and it doesn't work the way you think it should if you think it should work like assignment, because everything is immutable, so you really, really don't need assignment. Complete misunderstanding of the cut and its real dangers, complete misunderstanding of Negation as Failure, a central concept in logic programming (including in ASP), and so on, and so on.
The author failed to do due diligence. And if they've written "in a positive light" about Prolog, I would prefer not to read it because I'll pull my remaining hair out, which is not much after reading this.
Both you and GP have had some fairly strong responses to what looked like mild complaints, the kind I would expect anyone to have with a language they've used enough to find edges to.
The original example in the last section was this:
foo(A, B) :-
\+ (A = B),
A = 1,
B = 2.
foo(1, 2) returns true, so you'd expect foo(A, B) to return A=1, B=2. But it returns false.
foo(A,B) fails because \+ (A = B) fails, because A = B succeeds. That's because = is not an assignment but a unification, and in the query foo(A,B), A and B are unbound variables, so they always unify.

In fact, here I'm not sure whether the author expects = to work as an assignment or an equality. In \+ (A = B) they seem to expect it to work as an equality, but in A = 1, B = 2, they seem to expect it to work as an assignment. It is neither.
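To make the "two unbound variables always unify" point concrete, here is a minimal sketch of unification in Python. This is a toy for illustration, not a real library, and all names (Var, walk, unify) are made up:

```python
class Var:
    """An unbound logic variable."""
    def __init__(self, name):
        self.name = name

def walk(term, subst):
    # Follow variable bindings until we hit a non-variable or an unbound var.
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    # Return an extended substitution on success, or None on failure.
    a, b = walk(a, subst), walk(b, subst)
    if isinstance(a, Var):
        return {**subst, a: b}           # bind the variable
    if isinstance(b, Var):
        return {**subst, b: a}
    return subst if a == b else None     # non-variables must match exactly

A, B = Var("A"), Var("B")
s = unify(A, B, {})    # two unbound variables: unification always succeeds
print(s is not None)   # True, which is why \+ (A = B) fails in the query
```

With that in mind, A = B in the body of foo is not a test of equality and not an assignment: it simply binds A and B together, so the negation fails before A = 1 is ever reached.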
I appreciate unification is confusing and takes effort to get one's head around, but note that the author is selling a book titled LOGIC FOR PROGR∀MMERS (in small caps) so they should really try to understand what the damn heck this logic programming stuff is all about. The book is $30.
> This is also why you can't just write A = B+1: that unifies A with the compound term +(B, 1)
Honestly, we don't have to wrap everyone on the internets in a fuzzy warm cocoon of acceptance. Sometimes people talk bullshit. If they're open to learn, that's great, but the author is snarkily dismissing any criticism, so they can stew in their ignorance as far as I am concerned.
Like the OP says, the author didn't bother to RTFM before griping.
I, too, am a big fan of Prolog and have (at least) yearly binges where I write a lot of it for fun and profit (and some frustration), but I do not consider myself to be an expert. But even I can see that the author has what I would consider a pretty basic understanding of Prolog. Which makes it even more surprising they are writing a book that uses Prolog.
> "This is necessary for optimization but can lead to invalid programs."
Is this not the case? It feels right in my head, but I assume I'm missing something.
My understanding:

- Backtracking gets used to find other possible solutions
- Cut stops backtracking early, which means you might miss valid solutions
- Cut is often useful to prune search branches you know are a waste of time but Prolog doesn't
- But if you're wrong you might cut a branch with solutions you would have wanted, and if Prolog iterates all other solutions then I guess you could say it's provided an invalid solution/program?
Again, please be gentle. This sounded reasonable to me and I'm trying to understand why it wouldn't be. It's totally possible that it feels reasonable because it might be a common misconception I've seen other places. My understanding of how Prolog actually works under-the-hood is very patchy.
That's right, but missing valid solutions doesn't mean that your program is "invalid", whatever that means. The author doesn't say.
Cuts are difficult and dangerous. The danger is that they make your program behave in unexpected ways. Then again, Prolog programs behave in unexpected ways even without the cut, and once you understand why, you can use the cut to make them behave.

In my experience, when one begins to program in Prolog, they pepper their code with cuts to try and stop unwanted backtracking, which can often be avoided by understanding why Prolog is backtracking in the first place. But that's a hard thing to get one's head around, so everyone who starts out makes a mess of their code with the cut.
There are very legitimate and safe ways to use cuts. Prolog textbooks sometimes introduce a terminology of "red" and "green" cuts. Red cuts change the set of answers found by a query, green cuts don't. And that, in itself, is already hard enough to get one's head around.
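A loose imperative analogy for the red/green distinction, treating backtracking as a Python generator and "cut" as stopping the search early (this is an analogy of the idea, not Prolog's actual semantics; both function names are made up):

```python
def solutions_green(xs):
    # Green-cut analogue: stop once the unique match is found.
    # No answers are lost, because at most one element equals 42.
    for x in xs:
        if x == 42:
            yield x
            return            # "cut": the rest of the search is provably empty

def solutions_red(xs):
    # Red-cut analogue: stop after the FIRST even number.
    # Later even numbers are real answers we deliberately throw away.
    for x in xs:
        if x % 2 == 0:
            yield x
            return            # "cut": this changes the answer set

print(list(solutions_green([1, 42, 3])))   # [42]
print(list(solutions_red([1, 2, 3, 4])))   # [2]  (4 is cut away)
```

The green version's early return is harmless; the red version's early return is the one you have to think hard about, because whether losing those answers is a bug or the whole point depends on what you wanted.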
At first, don't use the cut, until you know what you're doing, is I think the best advice to give to beginner Prolog programmers. And to advanced ones sometimes. I've seen things...
This gets to the heart of my problem with Prolog: it's sold as if it's logic programming - just write your first-order predicate logic and we'll solve it. But then to actually use it you have to understand how it's executed - "understanding why Prolog is backtracking in the first place".
At that point, I would just prefer a regular imperative programming language, where understanding how it's executed is really straightforward, combined with some nice unification library and maybe a backtracking library that I can use explicitly when they are the appropriate tools.
Prolog is a logic-flavored programming language. I don't recall Prolog ever being "sold" as pure logic. More likely, an uninformed person simply assumed that Prolog used pure logic.
Complaining that Prolog logic doesn't match mathematical logic is like complaining that C++ objects don't accurately model real-life objects.
I don't recall Prolog ever being "sold" as pure logic.
One of the guides linked above describes it as: "The core of Prolog is restricted to a Turing complete subset of first-order predicate logic called Horn clauses". Does this sound to you like an attempt to deceive the reader into believing, as the GP comment stated, that the user can
> just write your first-order predicate logic and we'll solve it.
No it does not. Please read the words that you are citing, not the words that you imagine. I honestly can't tell if you are unable to parse that sentence or if you are cynically lying about your interpretation in order to "win" an internet argument.
All programming languages are restricted, at least, to a "Turing complete subset of first-order predicate logic." There is absolutely no implication or suggestion of automatically solving any, much less most, first order logic queries.
Prolog isn't "sold" as a logic programming language. It is a logic programming language. Like, what else is it?
I have to be honest and say I've heard this criticism before and it's just letting the perfect be the enemy of the good. The criticism is really that Prolog is not a 100% purely declarative language with 100% the same syntax and semantics as First Order Logic.
Well, it isn't, but if it was, it would be unusable. That would make the critics very happy, or at least the kind of critics that don't want anyone else to have cool stuff, but in the current timeline we just have a programming language that defines the logic programming paradigm, so it makes no sense to say it isn't a logic programming language.
Edit:
>> At that point, I would just prefer a regular imperative programming language, where understanding how it's executed is really straightforward, combined with some nice unification library and maybe a backtracking library that I can use explicitly when they are the appropriate tools.
Yeah, see what I mean? Let's just use Python, or Java, or C++ instead, which has 0% of FOL syntax and semantics and is 0% declarative (or maybe 10% in the case of C++ templates). Because we can't make do with 99% logic-based and declarative, gosh no. Better have no alternative than have a less than absolutely idealised perfect ivory tower alternative.
Btw, Prolog's value is its SLD-Resolution based interpretation. Backtracking is an implementation detail. If you need backtracking use yield or whatever other keyword your favourite imperative language gives you. As to unification, good luck with a "nice unification library" for other languages. Most programmers can't even get their head around regexes. And good luck convincing functional programmers that "two-way pattern matching" (i.e. unification) is less deadly than the Bubonic Plague.
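For what "use yield" looks like in practice: a recursive generator gives you choice points and automatic rewinding, which is the backtracking part without any of the logic. This is a hypothetical sketch with made-up names, not a real library:

```python
def assign(vars_left, domain, partial, constraint):
    """Enumerate all complete assignments satisfying `constraint`."""
    if not vars_left:
        if constraint(partial):
            yield dict(partial)
        return
    var, rest = vars_left[0], vars_left[1:]
    for value in domain:                      # choice point
        partial[var] = value
        yield from assign(rest, domain, partial, constraint)
        del partial[var]                      # undo binding on backtrack

# All (A, B) with A, B in 1..3 such that A + B == 4:
sols = list(assign(["A", "B"], [1, 2, 3], {},
                   lambda p: p["A"] + p["B"] == 4))
print(sols)  # [{'A': 1, 'B': 3}, {'A': 2, 'B': 2}, {'A': 3, 'B': 1}]
```

Note how much of Prolog this *doesn't* give you: the bindings here are plain assignments undone by hand, not unification, and there's no resolution driving the search.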
Ohhh, interesting. So a green cut is basically what I described as cutting branches you know are a waste of time, and red cuts are the ones where you're wrong and cut real solutions?
> At first, don't use the cut, until you know what you're doing, is I think the best advice to give to beginner Prolog programmers. And to advanced ones sometimes. I've seen things...
Yeah, I'm wondering how much of this is almost social or use-case in nature?
E.g., I'm experimenting with Prolog strictly as a logic language and I experiment with (at a really novice level) things like program synthesis or model-to-model transformations to emulate macro systems that flow kind of how JetBrains MPS handles similar things. I'm basically just trying to bend and flex bidirectional pure relations (I'm probably conflating fp terms here) because it's just sort of fun to me, yeah?
So cut _feels_ like something I'd only use if I were optimizing and largely just as something I'd never use because for my specific goals, it'd be kind of antithetical--and also I'm not an expert so it scares me. Basically I'm using it strictly because of the logic angle, and cut doesn't feel like a bad thing, but it feels like something I wouldn't use unless I created a situation where I needed it to get solutions faster or something--again, naively anyway.
Whereas if I were using Prolog as a daily GP language to actually get stuff done, which I know it's capable of, it makes a lot of sense to me to see cut and `break` as similar constructs for breaking out of a branch of computation that you know doesn't actually go anywhere?
I'm mostly spit-balling here and could be off base. Very much appreciate the response, either way.
Basically, yes, except it's not necessarily "wrong", just dangerous because it's tempting to use it when you don't really understand what answers you're cutting. So you may end up cutting answers you'd like to see after all. The "red" is supposed to signify danger. Think of it as red stripes, like.
Which make stuff go faster too (well, a little bit). So, yeah, cuts in general help the compiler/interpreter optimise code execution. I however use it much more for its ability to help me control my program. Prolog makes many concessions to efficiency and usability, and the upshot of this is you need to be aware of its idiosyncrasies, the cut being just one of them.
>> Whereas if I were using Prolog as a daily GP language to actually get stuff done, which I know it's capable of, it makes a lot of sense to me to see cut and `break` as similar constructs for breaking out of a branch of computation that you know doesn't actually go anywhere?
Cuts work like breaks sometimes, but not always. To give a clear example of where I always use cuts, there's a skeleton you use when you want to process the elements of a list that looks like this:
list_processing([], Bind, Bind):-
!. % <-- Easiest way to not backtrack once the list is empty.
list_processing([X|Xs], ..., Acc, Bind, ... ):-
condition(X)
,! % Easiest way to not fall over to the last clause.
,process_a(X,Y)
,list_processing(Xs, ..., [Y|Acc], Bind, ... ).
list_processing([X|Xs], ..., Acc, Bind, ... ):-
process_b(X,Y)
,list_processing(Xs, ..., [Y|Acc], Bind, ... ).
So, the first cut is a green cut because it doesn't change the set of answers your program will find: once the list in the first argument is empty, it's empty, there's no more to process. However, Prolog will leave two choice points behind, one for each of the other two clauses, because it can't know what you're trying to do, so it can't just stop because it found an empty list.

The second cut is technically a red cut: you'd get more answers if you allowed both process_a and process_b to modify your list's elements, but the point is you don't want that, so you cut as soon as you know you only want process_a. So this is forcing a path down one branch of the search, not quite like a break (nor a continue).
You could also get the same behaviour without a cut, by e.g. having a negated condition(X) check in the last clause and also checking that the list is not empty in every other clause (most compilers are smart enough to know that means no more choice points are needed), but, why? All you gain this way is theoretical purity, and more verbose code. I prefer to just cut there and get it done. Others of course disagree.
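A loose imperative analogue of that skeleton, for readers coming from the break/cut comparison: the commit-on-guard behaviour that the second cut buys you is exactly what an if/else chain gives you for free. All names here are invented for illustration:

```python
def list_processing(xs, condition, process_a, process_b):
    """Process each element with process_a if its guard holds, else process_b."""
    acc = []
    for x in xs:
        if condition(x):             # clause 2: the guard; commit here ("cut")
            acc.append(process_a(x))
        else:                        # clause 3: only reached if the guard failed
            acc.append(process_b(x))
    return acc

print(list_processing([1, 2, 3],
                      lambda x: x % 2 == 0,   # condition/1
                      lambda x: x * 10,       # process_a/2
                      lambda x: -x))          # process_b/2
# [-1, 20, -3]
```

The cut-free Prolog variant described above (negated guard in the last clause) corresponds to writing `if condition(x)` in one branch and `if not condition(x)` in the other: same answers, more verbose.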
Fair enough. I believe this is a variation of Cunningham's Law, which states "the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer."
Everything you wrote about backtracking is completely correct. If I may paraphrase, it boils down to: cut can be used to avoid executing unnecessary code, but using it the wrong place will avoid executing necessary code, and that would be bad. My point is: the same could be said about the "break" keyword in C++: it can avoid unnecessary iterations in a loop, or it can exit a loop prematurely. Cut and break are both control structures which make sense in the context of their respective languages, but neither would be accurately described as "for optimization."
Picat (mentioned by the author)
Datalog
Mercury
XSB
are there more?
I think Curry is an interesting take on logic programming. A sort of Haskell meets Prolog.
Bidirectionality has always been super fascinating.
Didn’t know about Picat. 100% going to check it out.
I’m mostly a language tourist who likes kicking the tires on modes of modeling problems that feel different to my brain.
Started skimming those notes. Really solid info. Appreciate it!
thisWouldBeGreat :- !, fail.
| ?- findall(A, (tree(A, N), branch(N)), As).
As = [n,n1]
yes
See https://lpn.swi-prolog.org/lpnpage.php?pagetype=html&pageid=...