> "4. Prolog is good at solving reasoning problems."
Plain Prolog's way of solving reasoning problems is effectively:
for person in [martha, brian, sarah, tyrone]:
    if timmy.parent == person:
        print("solved!")
You hard code some options, write a logical condition with placeholders, and Prolog brute-forces every option in every placeholder. It doesn't do reasoning.
Arguably it lets a human express reasoning problems better than other languages, by letting you write high-level code in a declarative way - instead of allocating memory, choosing data types, initializing linked lists and so on - so you can focus on the reasoning. But that is no benefit to an LLM, which can output any language as easily as any other. And while that might have been nice compared to Pascal in 1975, it's not so different from modern garbage-collected high-level scripting languages. Arguably Python or JavaScript will benefit an LLM most, because there are so many more training examples of them than of almost any other language.
>> You hard code some options, write a logical condition with placeholders, and Prolog brute-forces every option in every placeholder. It doesn't do reasoning.
SLD-Resolution with unification (Prolog's automated theorem proving algorithm) is the polar opposite of brute force: as the proof proceeds, the cardinality of the set of possible answers [1] decreases monotonically. Unification itself is nothing but a dirty hack to avoid having to ground the Herbrand base of a predicate before completing a proof; which is basically going from an NP-complete problem to a linear-time one (on average).
Besides which I find it very difficult to see how a language with an automated theorem prover for an interpreter "doesn't do reasoning". If automated theorem proving is not reasoning, what is?
> "as the proof proceeds, the cardinality of the set of possible answers [1] decreases"
In the sense that it cuts off part of the search tree where answers cannot be found?
member(X, [1,2,3,4]),
X > 5,
slow_computation(X, 0.001).
will never do the slow_computation - but if it did, it would come up with the same result. How is that the polar opposite of brute force, rather than an optimization of brute-force?
If a language has tail call optimization then it can handle deeper recursive calls with less memory. Without TCO it would do the same thing and get the same result but using more memory, assuming it had enough memory. TCO and non-TCO aren't polar opposites, they are almost the same.
Rather, in the sense that during a Resolution-refutation proof, every time a new Resolution step is taken, the number of possible subsequent Resolution steps either gets smaller or stays the same (i.e. "decreases monotonically"). That's how we know for sure that if the proof is decidable there comes a point at which no more Resolution steps are left, and either the empty clause is all that remains, or some non-empty clause remains that cannot be reduced further by Resolution.
So basically Resolution gets rid of more and more irrelevant ...stuff as it goes. That's what I mean when I say it's "the polar opposite of brute force": it's actually pretty smart and it avoids doing the dumb thing of having to process all the things all the time before it can reach a conclusion.
Note that this is the case for Resolution, in the general sense, not just SLD-Resolution, so it does not depend on any particular search strategy.
I believe SLD-Resolution specifically (which is the kind of Resolution used in Prolog) goes much faster, first because it's "[L]inear" (i.e. in any Resolution step one clause must be one of the resolvents of the last step) and second because it's restricted to [D]efinite clauses and, as a result, there is only one resolvent at each new step and it's a single Horn goal so the search (of the SLD-Tree) branches in constant time.
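For a concrete toy illustration (not taken from the thread) of what a sequence of SLD-Resolution steps looks like, here is a four-clause program and the steps for one query; at each step the current goal is rewritten against one program clause until either the empty clause is reached or no clause matches:
p(X) :- q(X), r(X).
q(a).
q(b).
r(b).
% ?- p(X).
% Goal: p(X)          resolve with the p/1 clause -> q(X), r(X)
% Goal: q(X), r(X)    resolve with q(a)           -> r(a)   (no matching clause: backtrack)
%                     resolve with q(b)           -> r(b)
% Goal: r(b)          resolve with r(b)           -> []     (empty clause: refutation found)
% X = b.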
Refs:
J. Alan Robinson, "A Machine-Oriented Logic Based on the Resolution Principle" [1965 paper that introduced Resolution]
I don't want to keep editing the above comment, so I'm starting a new one.
I really recommend that anyone with an interest in CS and AI read at least J. Alan Robinson's paper above. For me it really blew my mind when I finally found the courage to do it (it's old and a bit hard to read). I think there's a trope in wushu where someone finds an ancient scroll that teaches them a long-lost kung-fu and they become enlightened? That's how I felt when I read that paper, like I gained a few levels in one go.
Resolution is a unique gem of symbolic AI, one of its major achievements and a workhorse: used not only in Prolog but also in one of the two dominant branches of SAT-Solving (i.e. the one that leads from Davis-Putnam to Conflict Driven Clause Learning) and even in machine learning, in one of the two main branches of Inductive Logic Programming (which I study), which is based on trying to perform induction by inverting deduction and so by inverting Resolution. There's really an ocean of knowledge that flows never-ending from Resolution. It's the bee's knees and the aardvark's nightgown.
I sincerely believe that the reason so many CS students seem to be positively traumatised by their contact with Prolog is that the vast majority of courses treat Prolog as any other programming language and jump straight to the peculiarities of the syntax and how to code with it, and completely fail to explain Resolution theorem proving. But that's the whole point of the language! What they get instead is some lyrical waxing about the "declarative paradigm", which makes no sense unless you understand why it's even possible to let the computer handle the control flow of your program while you only have to sort out the logic. Which is to say: because FOL is a computational paradigm, not just an academic exercise. No wonder so many students come off those courses thinking Prolog is just some stupid academic faffing about, and that it's doing things differently just to be different (not a strawman- actual criticism that I've heard).
In this day and age where confusion reigns about what it even means to "reason", it's a shame that the answer, which is to be found right there, under our noses, is neglected or ignored because of a failure to teach it right.
The way to learn a language is not via its syntax but by understanding the computation model and the abstract machine it is based on. For imperative languages this is rather simple and so we can jump right in and muddle our way to some sort of understanding. With Functional languages it is much harder (you need to know the logic of functions) and it is quite impossible with Logic languages (you need to know predicate logic). Thus we need to first focus on the underlying mathematical concepts for these categories of languages.
The Robert Kowalski paper Predicate Logic as a Programming Language you list above is the Rosetta stone of logic languages and an absolute must-read for everybody. It builds everything up from the foundations using implication (in disjunctive form), clause, clausal sentence, semantics, Horn clauses and computation (i.e. resolution derivation); all absolutely essential to understanding! This is the "enlightenment scroll" of Prolog.
I don't understand (the point of) your example. In all branches of the search `X > 5` will never be `true`, so yeah, `slow_computation` will not be reached. How does that relate to your point of it being "brute force"?
>> but if it did, it would come up with the same result
Meaning either changing the condition or the order of the clauses. How do you expect Prolog to proceed to `slow_computation` when you have declared a statement (X > 5) that is always false before it?
The point is to compare a) evaluate all three lines (member, >5, slow_computation) then fail because the >5 test failed; against b) evaluate (member, >5) then fail. And to ask whether that's the mechanism YeGoblynQueenne is referring to. If so, is it valid to describe b as "the polar opposite" of a? They don't feel like opposites, merely an implementation-detail performance hack. We can imagine some completely different strategy such as "I know from some other Constraint Logic propagation that slow_computation has no solutions so I don't even need to go as far as the X>5 test" which is "clever" not "brute".
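For what it's worth, that "clever" alternative does exist: a minimal sketch, assuming SWI-Prolog's library(clpfd), with slow_computation/2 staying hypothetical as in the example above. Here the conjunction fails by constraint propagation alone, before any value is ever enumerated:
:- use_module(library(clpfd)).
demo :-
    X in 1..4,
    X #> 5,                      % propagation empties X's domain here: immediate failure
    slow_computation(X, 0.001).  % never reached, and never even considered value by value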
> "How do you expect Prolog to proceed to `slow_computation` when you have declared a statement (X > 5) that is always false before it"
I know it doesn't, but there's no reason why it can't. In a C-like language it's common to do short-circuit Boolean logic evaluation like:
A && B && C
and if the first AND fails, the second is not tested. But if the language/implementation doesn't have that short-circuit optimisation, both tests are run, the outcome doesn't change. The short-circuit eval isn't the opposite of the full eval. And yes this is nitpicking the term "polar opposite of" but that's the relevant bit about whether something is clever or brute - if you go into every door, that's brute. If you try every door and some are locked, that's still brute. If you see some doors have snow up to them and you skip the ones with no footprints, that's completely different.
Prolog was introduced to capture natural language - in a logic/symbolic way that didn't prove as powerful as today's LLM for sure, but this still means there is a large corpus of direct English to Prolog mappings available for training, and also the mapping rules are much more straightforward by design. You can pretty much translate simple sentences 1:1 into Prolog clauses as in the classic boring example
% "the boy eats the apple"
eats(boy, apple).
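The same near-1:1 mapping carries over to rules; an illustrative extension (not from the original comment), reusing the eats/2 fact above:
% "every child who eats an apple is healthy"
healthy(X) :- child(X), eats(X, apple).
% "the boy is a child"
child(boy).
% ?- healthy(boy).
% true.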
This is being taken advantage of in Prolog code generation using LLMs. In the Quantum Prolog example, the LLM is also instructed not to generate search strategies/algorithms but just planning domain representation and action clauses for changing those domain state clauses which is natural enough in vanilla Prolog.
The results are quite a bit more powerful, close to end user problems, and upward in the food chain compared to the usual LLM coding tasks for Python and JavaScript such as boilerplate code generation and similarly idiosyncratic problems.
"large corpus" - large compared to the amount of Python on Github or the amount of JavaScript on all the webpages Google has ever indexed? Quantum Prolog doesn't have any relevant looking DuckDuckGo results, I found it in an old comment of yours here[1] but the link goes to a redirect which is blocked by uBlock rules and on to several more redirects beyond which I didn't get to a page. In your linked comment you write:
> "has convenient built-in recursive-decent parsing with backtracking built-in into the language semantics, but also has bottom-up parsing facilities for defining operator precedence parsers. That's why it's very convenient for building DSLs"
which I agree with, for humans. What I am arguing is that LLMs don't have the same notion of "convenient". Them dumping hundreds of lines of convoluted 'unreadable' Python (or C or Go or anything) to implement "half of Common Lisp" or "half of a Prolog engine" for a single task is fine, they don't have to read it, and it gets the same result. What would be different is if it got a significantly better result, which I would find interesting but haven't seen a good reason why it would.
This sparked a really fascinating discussion, I don't know if anyone will see this but thanks everyone for sharing your thoughts :)
I understand your point - to an LLM there's no meaningful difference between one Turing-complete language and another. I'll concede that I don't have a counter-argument, and perhaps it doesn't need to be Prolog - though my hunch is that LLMs tend to give better results when using purpose-built tools for a given type of problem.
The only loose end I want to address is the idea of "doing reasoning."
This isn't an AGI proposal (I was careful to say "good at writing prolog") just an augmentation that (as a user) I haven't yet seen applied in practice. But neither have I seen it convincingly dismissed.
The idea is the LLM would act like an NLP parser that gradually populates a prolog ontology, like building a logic jail one brick at a time.
The result would be a living breathing knowledge base which constrains and informs the LLM's outputs.
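For illustration, the mechanism could be as simple as asserting clauses at run time - a minimal sketch with made-up predicate names, where the LLM would be the one calling learn/1:
learn(Clause) :- assertz(Clause).
% ?- learn(parent(martha, timmy)).
% ?- learn(parent(timmy, anna)).
% ?- learn((grandparent(X, Z) :- parent(X, Y), parent(Y, Z))).
% ?- grandparent(martha, Who).
% Who = anna.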
The punchline is that I don't even know any prolog myself, I just think it's a neat idea.
It's a Horn clause resolver... that's exactly the kind of reasoning that LLMs are bad at. I have no idea how to graft Prolog to an LLM, but if you can graft any programming language to it, you can graft Prolog more easily.
Also, that you push Python and JavaScript makes me think you don't know many languages. Those are terrible languages to try to graft to anything. Just because you only know those 2 languages doesn't make them good choices for something like this. Learn a real language Physicist.
> Those are terrible languages to try to graft to anything.
Web browsers, Blender, LibreOffice and Excel all use those languages for embedded scripting. They're fine.
> Just because you only know those 2 languages doesn't make them good choices for something like this.
You misunderstood my claim and are refuting something different. I said there is more training data for LLMs to use to generate Python and JavaScript, than Prolog.
I'm not. Python and JS are scripting languages. And in this case, we want something that models formal logic. We are hammering in a nail, you picked up a screwdriver and I am telling you to use a claw hammer.
What does this comment even mean? A claw hammer? By formal definitions, all 3 languages are Turing complete and can express programs of the same computational complexity.
Wrapping either the SWI-Prolog MQI, or even simpler an existing Python interface like janus_swi, in a simple MCP is probably an easy weekend project. Tuning the prompting to get an LLM to reliably and effectively choose to use it when it would benefit from symbolic reasoning may be harder, though.
We would begin by having a Prolog server of some kind (I have no idea if Prolog is parallelized but it should very well be if we're dealing with Horn Clauses).
There would be MCP bindings to said server, which would be accessible upon request. The LLM would provide a message, it could even formulate Prolog statements per a structured prompt, and then await the result, and then continue.
> Its a Horn clause resolver...that's exactly the kind of reasoning that LLMs are bad at. I have no idea how to graft Prolog to an LLM but if you can graft any programming language to it, you can graft Prolog more easily.
By grafting the LLM into Prolog and not the other way around?
Of course it does "reasoning", what do you think reasoning is? From a quick google: "the action of thinking about something in a logical, sensible way". Prolog searches through a space of logical propositions (constraints) and finds conditions that lead to solutions (if any exist).
(a) Try adding another 100 or 1000 interlocking propositions to your problem. It will find solutions or tell you one doesn't exist.
(b) You can verify the solutions yourself. You don't get that with imperative descriptions of problems.
(c) Good luck sandboxing Python or JavaScript with the threat of prompt injection still unsolved.
Of course it doesn't "do reasoning", why do you think "following the instructions you gave it in the stupidest way imaginable" is 'obviously' reasoning? I think one definition of reasoning is being able to come up with any better-than-brute-force thing that you haven't been explicitly told to use on this problem.
Prolog isn't "thinking". Not about anything, not about your problem, your code, its implementation, or any background knowledge. Prolog cannot reason that your problem is isomorphic to another problem with a known solution. It cannot come up with an expression transform that hasn't been hard-coded into the interpreter which would reduce the amount of work involved in getting to a solution. It cannot look at your code, reason about it, and make a logical leap over some of the code without executing it (in a way that hasn't been hard-coded into it by the programmer/implementer). It cannot reason that your problem would be better solved with SLG resolution (tabling) instead of SLD resolution (depth first search). The point of my example being pseudo-Python was to make it clear that plain Prolog (meaning no constraint solver, no metaprogramming), is not reasoning. It's no more reasoning than that Python loop is reasoning.
If you ask me to find the largest Prime number between 1 and 1000, I might think to skip even numbers, I might think to search down from 1000 instead of up from 1. I might not come up with a good strategy but I will reason about the problem. Prolog will not. You code what it will do, and it will slavishly do what you coded. If you code counting 1-1000 it will do that. If you code Sieve of Eratosthenes it will do that instead.
It's a Horn clause interpreter. Maybe look up what that is before commenting on it. Clearly you don't have a good grasp of Computer Science concepts or math based upon your comments here. You also don't seem to understand the AI/ML definition of reasoning (which is based in formal logic, much like Prolog itself).
Python and Prolog are based upon completely different kinds of math. The only thing they share is that they are both Turing complete. But being Turing complete isn't a strong or complete mathematical definition of a programming language. This is especially true for Prolog which is very different from other languages, especially Python. You shouldn't even think of Prolog as a programming language, think of it as a type of logic system (or solver).
Contrary to what everyone else is saying, I think you're completely correct. Using it for AI or "reasoning" is a hopeless dead end, even if people wish otherwise. However I've found that Prolog is an excellent language for expressing certain types of problems in a very concise way, like parsers, compilers, and assemblers (and many more). The whole concept of using a predicate in different modes is actually very useful in a pragmatic way for a lot of problems.
When you add in the constraint solving extensions (CLP(Z) and CLP(B) and so on) it becomes even more powerful, since you can essentially mix vanilla Prolog code with solver tools.
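As a small, hedged example of that mixing (SWI-Prolog's library(clpfd), which plays the CLP(Z) role here): ordinary Prolog list handling sits next to constraint posting, and enumeration only happens at the end, over whatever propagation has left.
:- use_module(library(clpfd)).
distinct_digits(Ds) :-
    length(Ds, 4),      % plain Prolog
    Ds ins 1..9,        % constraint: domains
    all_distinct(Ds),   % constraint: pairwise different
    sum(Ds, #=, 20),    % constraint: arithmetic
    label(Ds).          % search last, e.g. Ds = [1, 2, 8, 9]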
The reason why you can write parsers with Prolog is because you can cast the problem of determining whether a string belongs to a language or not as a proof, and, in Prolog, express it as a set of Definite Clauses, particularly with the syntactic sugar of Definite Clause Grammars that give you an executable grammar that acts as both acceptor and generator and is equivalent to a left-corner parser.
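A tiny illustrative DCG (not from the thread), showing the same grammar running as acceptor and as generator:
greeting --> [hello], name.
name --> [world].
name --> [prolog].
% ?- phrase(greeting, [hello, prolog]).   % acceptor: succeeds
% true.
% ?- phrase(greeting, Ws).                % generator: enumerates the language
% Ws = [hello, world] ;
% Ws = [hello, prolog].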
Now, with that in mind, I'd like to understand how you and the OP reconcile the ability to carry out a formal proof with the inability to do reasoning. How is it not reasoning, if you're doing a proof? If a proof is not reasoning, then what is?
Clearly people write parsers in C and C++ and Pascal and OCAML, etc. What does it mean to come in with "the reason you can write parsers with Prolog..."? I'm not claiming that reason is incorrect, I'm handwaving it away as irrelevant and academic. Like saying that Lisp map() is better than Python map() because Lisp map is based on formal Lambda Calculus and Python map is an inferior imitation for blub programmers. When a programmer maps a function over a list and gets a result, it's a distinction without a difference. When a programmer writes a getchar() peek() and goto state machine parser with no formalism, it works, what difference does the formalism behind the implementation practically make?
Yes, maybe the Prolog way means concise code that is easier for a human to check as a correct expression of the intent, but an LLM won't look at it like that. Whatever the formalism brings, it hasn't been enough for every parser task of the last 50 years to be done in Prolog. Therefore it isn't of any particular interest or benefit, except academically.
> both acceptor and generator
Also academically interesting but practically useless due to the combinatorial explosion of "all possible valid grammars" after the utterly basic "aaaaabbbbbbbbbbbb" examples.
> "how you and the OP reconcile the ability to carry out a formal proof with the inability to do reasoning. How is it not reasoning, if you're doing a proof? If a proof is not reasoning, then what is?"
If drawing a painting is art, is it art if a computer pulls up a picture of a painting and shows it on screen? No. If a human coded the proof into a computer, the human is reasoning, the computer isn't. If the computer comes up with the proof, the computer is reasoning. Otherwise you're in a situation where dominos falling over is "doing reasoning" because it can be expressed formally as a chain of connected events where the last one only falls if the whole chain is built properly, and that's absurd.
> If a human coded the proof into a computer, the human is reasoning, the computer isn't. ... If the computer comes up with the proof, the computer is reasoning.
That is exactly what "formal logic programming" is all about. The machine is coming up with the proof for your query based on the facts/rules given by you. Therefore it is a form of reasoning.
Reasoning (cognitive thinking) is expressed as Arguments (verbal/written premises-to-conclusions), a subset of which are called Proofs (step-by-step valid arguments). Using Formalization techniques we have just pushed some of those proof derivations to a machine.
With Prolog, the proof is carried out by the computer, not a human. A human writes up a theory and a theorem and the computer proves the theorem with respect to the theory. So I ask again, how is carrying out a proof not reasoning?
>> I'm not claiming that reason is incorrect, I'm handwaving it away as irrelevant and academic.
The word "reason" came into this thread with the original comment:
3. LLMs are bad at solving reasoning problems.
4. Prolog is good at solving reasoning problems.
I agree with you. In Prolog "?- 1=1." is reasoning by definition. Then 4. becomes "LLMs should emit Prolog because Prolog is good at executing Prolog code".
I think that's not a useful place to be, so I was trying to head off going there. But now I'll go with you - I agree it IS reasoning - can you please support your case that "executing Prolog code is reasoning" makes Prolog more useful for LLMs to emit than Python?
But I was mainly asking why you say that Prolog's execution is "not reasoning". I don't understand what you mean that '"?- 1=1." is reasoning by definition' and how that ties-in with our discussion about Prolog reasoning or not.
"?- 1=1." is Prolog code. Executing Prolog code is reasoning. Therefore that is reasoning. Q.E.D. This is the point you refused to move on from until I agreed. So I agreed. So we could get back to the interesting topic.
A topic you had no interest in, only an interest in dragging it onto a tangent and grinding it down to make ... what point, exactly? If "executing Prolog code" is reasoning, then what? I say it isn't useful to call it reasoning (in the context of this thread) because it's too broad to be a helpful definition: basically everything is reasoning, and almost nothing is not. When I tried to say in advance that this wouldn't be a useful direction and I didn't want to go here, you said it was "not a great way to have a discussion". And now, having dragged me off onto this academic tangent, you dismiss it as "I wasn't interested in that other topic anyway". Annoying.
I'm sorry you find my contribution to the discussion annoying, but how should I feel if you just "agree" with me as a way to get me to stop arguing?
But I think your annoyance may be caused by misunderstanding my argument. For example:
>> If "executing Prolog code" is reasoning, then what? I say it isn't useful to call it reasoning (in the context of this thread) because it's too broad to be a helpful definition, basically everything is reasoning, and almost nothing is not.
Everything is not reasoning, nor is executing any code reasoning, but "executing Prolog code" is, because executing Prolog code is a special case of executing code. The reason for that is that Prolog's interpreter is an automated theorem prover, therefore executing Prolog code is carrying out a proof; in an entirely literal and practical sense, and not in any theoretical or abstract sense. And it is very hard to see how carrying out a proof automatically is "not reasoning".
I made this point in my first comment under yours, here:
The same clearly does not apply to Python, because its interpreter is not an automated theorem prover; it doesn't apply to javascript because its interpreter is not an automated theorem prover; it doesn't apply to C because its compiler is not an automated theorem prover; and so on, and so forth. Executing code in any of those languages is not reasoning, except in the most abstract and, well, academic, sense, e.g. in the context of the Curry-Howard correspondence. But not in the practical, down-to-brass-tacks way it is in Prolog. Calling what Prolog does reasoning is not a definition of reasoning that's too broad to be useful, as you say. On the contrary, it's a very precise definition of reasoning that applies to Prolog but not to most other programming languages.
I think you misunderstand this argument and as a consequence fail to engage with it and then dismiss it as irrelevant because you misunderstand it. I think you should really try to understand it, because it's obvious you have some strong views on Prolog which are not correct, and you might have the chance to correct them.
I absolutely have an interest in any claim that generating Prolog code with LLMs will fix LLMs' inability to reason. Prolog is a major part of my programming work and research.
> "?- 1=1." is Prolog code. Executing Prolog code is reasoning. Therefore that is reasoning. Q.E.D.
This is the dumbest thing i have read yet on HN. You are absolutely clueless about this topic and are merely arguing for argument's sake.
> If "executing Prolog code" is reasoning, then what? I say it isn't useful to call it reasoning (in the context of this thread) because it's too broad to be a helpful definition, basically everything is reasoning, and almost nothing is not.
What does this even mean? It has already been pointed out that Prolog does a specific type of formalized reasoning which is well understood. The fact that there are other formalized models to tackle subdomains of "Commonsense Reasoning" does not detract from the above. That is why folks are trying to marry Prolog (predicate logic) to LLMs (mainly statistical approaches) to get the best of both worlds.
User "YeGoblynQueenne" was being polite in his comments but for some reason you willfully don't want to understand and have come up with ridiculous examples and comments which only reflect badly on you.
You call it the dumbest thing you have ever read, and say that I know nothing - but you agree that it is a correct statement ("Prolog does a specific type of formalized reasoning").
> "What does this even mean?"
For someone who is so eager to call comments dumb, you sure have a lot of not-understanding going on.
1. Someone said "Prolog is good at reasoning problems"
2. I said it isn't any better than other languages.
3. Prolog people jumped on me because Ackchually Technickally everything Prolog does is 'reasoning' hah gotcha!
4. I say that is entirely unrelated to the 'reasoning' in "Prolog is good at reasoning problems". I demonstrate this by reductio ad absurdum - if executing "?- 1=1." is "reasoning" then it's absurd for the person to be saying that definition is a compelling reason to use Prolog, therefore they were not saying that, therefore this whole tangent about whether some formalism is or isn't reasoning by some academic definition is irrelevant to the claim and counter claim.
> "are merely arguing for argument's sake."
Presumably you are arguing for some superior purpose?
The easiest way for you to change my mind is to demonstrate literally anything that is better for an LLM to emit in Prolog than Python - given the condition that LLMs don't have to care about conciseness or expressivity or readability in the same way humans do. For one example, I say it would be no better for an LLM to solve an Einstein Puzzle one way or the other. The fact that you can't or won't do this, and prefer insults, is not changing my mind nor is it educating me in anything.
You edited your comment without any indication tags which is dishonest. However, my previous response at https://news.ycombinator.com/item?id=45939440 is still valid. This is an addendum to that;
> The easiest way for you to change my mind is to demonstrate literally anything that is better for an LLM to emit in Prolog than Python
I have no interest in trying to change your mind since you simply do not have the first idea about what Prolog is doing vis-a-vis any other non-logic programming language. You have to have some basic knowledge before we can have a meaningful discussion.
However, in my previous comment here https://news.ycombinator.com/item?id=45712934 i link to some use cases from others. In particular, the case study from user "bytebach" is noteworthy and explains exactly what you are asking for.
> The fact that you can't or won't do this, and prefer insults, is not changing my mind nor is it educating me in anything.
This is your dishonest edit without notification. I refuse to suffer wilful stupidity and hence retorted in a pointed manner; that was the only way left to get the message across. We had given you enough data/pointers in our detailed comments none of which you seem to have even grasped nor looked into. In a forum like this, if we are to learn from each other, both parties must put forth effort to understand the other side and articulate one's own position clearly. You have failed on both counts in this thread.
No, i did not; do not twist nor misrepresent my words. Your example had nothing whatsoever to do with "Reasoning" and hence i called it dumb.
> you sure have a lot of not-understanding going on.
Your and my comments are there for all to see. Your comments are evidence that you are absolutely clueless on Reasoning, Logic Programming Approaches and Prolog.
> 1. Someone said "Prolog is good at reasoning problems"
Which is True. But it is up to you to present the world-view to Prolog in the appropriate Formal manner.
> 2. I said it isn't any better than other languages.
Which is stupid. This single statement establishes the fact that you know nothing about Logic Programming nor the aspect of Predicate Logic it is based on.
> 3. Prolog people jumped on me because Ackchually Technickally everything Prolog does is 'reasoning' hah gotcha!
Which is True and not a "gotcha". You have no definite understanding of what the word "Reasoning" means in the context of Prolog. We have explained concepts and pointed you to papers none of which you are interested in studying nor understanding.
> 4. I say that is entirely unrelated to the 'reasoning' in "Prolog is good at reasoning problems". I demonstrate this by reductio ad absurdum - if executing "?- 1=1." is "reasoning" then it's absurd for the person to be saying that definition is a compelling reason to use Prolog, therefore they were not saying that, therefore this whole tangent about whether some formalism is or isn't reasoning by some academic definition is irrelevant to the claim and counter claim.
What does this even mean? This is just nonsense verbiage.
> Presumably you are arguing for some superior purpose?
Yes. I am testing my understanding of Predicate Logic/Logic Programming/Prolog against others. Also whether others have come up with better ways of application in this era of LLMs, i.e. what are the different ways to use Prolog with LLMs today?
I initially thought you were probably wanting a philosophical discussion of what "Reasoning" means and hence pointed to some relevant articles/papers but i am now convinced you have no clue about this entire subject and are really making up stuff as you go.
You are wasting everybody's time, testing their patience and coming across as totally ignorant on this domain.
Even in your example (which is obviously not a correct representation of Prolog), that code will run orders of magnitude faster and with 100% reliability compared to the far inferior reasoning capabilities of LLMs.
Algorithmically there's nothing wrong with using BFS/DFS to do reasoning as long as the logic is correct and the search space is constrained sufficiently. The hard part has always been doing the constraining, which LLMs seem to be rather good at.
Because I can solve problems that would take the age of the universe to brute force, without waiting the age of the universe. So can you: start counting at 1, increment the counter up to 10^8000, then print the counter value.
The brain can still use other means of working in addition to brute forcing solutions. For example, how would you go about solving the chess puzzle of eight queens that doesn't involve going through the potential positions and then filtering out the options that don't match the criteria for the solution?
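For the record, the standard constraint-based answer to that puzzle (a sketch using SWI-Prolog's library(clpfd)) still searches candidate positions, but propagation discards whole regions of the board at once instead of testing placements one by one:
:- use_module(library(clpfd)).
queens(N, Qs) :-                    % Qs holds the row of the queen in each column
    length(Qs, N),
    Qs ins 1..N,
    safe(Qs),
    labeling([ff], Qs).
safe([]).
safe([Q|Qs]) :- no_attack(Q, Qs, 1), safe(Qs).
no_attack(_, [], _).
no_attack(Q, [Q1|Qs], D) :-
    Q #\= Q1,                       % different rows
    abs(Q - Q1) #\= D,              % not on a shared diagonal
    D1 #= D + 1,
    no_attack(Q, Qs, D1).
% ?- queens(8, Qs).   % yields a valid placement such as Qs = [1, 5, 8, 6, 3, 7, 2, 4]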
Prolog can also evaluate mathematical expressions directly as well.
There's a whole lot of undecidable (or effectively undecidable) edge cases that can be adequately covered. As a matter of fact, Decidability Logic is compatible with Prolog.
I can absolutely try this. Doesn't mean i'll solve it. If i solve it there's no guarantee i'll be correct. Math gets way harder when i don't have a legitimate need to do it. This falls in the "no legit need" so my mind went right to "100 * 70, good enough."
Yeah, read it in another comment. Why do you think doing calculations in your head is brute-forcing? Many people can do it flawlessly, without even knowing of these "tricks". They just know. Is that brute-force?
Your original comment completely missed the point of what it was replying to, you phrased it like you were correcting them but you were actually in agreement and didn't seem to realize it. They tried to clarify when you asked and when you responded you assumed they had the opposite viewpoint from what they actually have.
> how is calculating it in your own head brute-force
Doing this calculation in your head is not brute force, which was their entire point. Their math question was an example of why the brain isn't brute-forcing solutions, replying to this:
> What makes you think your brain isn't also brute forcing potential solutions subconciously and only surfacing the useful results?
Just note: human pattern matching is not Haskell/Erlang/ML pattern matching. It doesn't go [1] through all possible matches of every possible combination of all available criteria
[1] If it does, it's the most powerful computing device imaginable.
There are hundreds of trillions of synapses in the brain, and much of what they do (IANANS) could reasonably be described as pattern matching: mostly sitting idle waiting for patterns. (Since dendritic trees perform a lot of computation (for example, combining inputs at each branch), if you want to count the number of pattern matchers in the branch you can't just count neurons. A neuron can recognise more than one pattern.)
So yes, thanks to its insanely parallel architecture, the brain is also an insanely brute force pattern matcher, constantly matching against who knows how many trillions of previously seen patterns. (BTW IMHO this is why LLMs work so well)
(I do recognise the gap in my argument: are all those neurons actually receiving inputs to match against, or are they 'gated'? But we're really just arguing about semantics of applying "brute force", a CS term, to a neural architecture, where it has no definition.)
Everything you've written here is an invalid over-reduction, I presume because you aren't terribly well versed with Prolog. Your simplification is not only outright erroneous in a few places, but essentially excludes every single facet of Prolog that makes it a turing complete logic language. What you are essentially presenting Prolog as would be like presenting C as a language where all you can do is perform operations on constants, not even being able to define functions or preprocessor macros. To assert that's what C is would be completely and obviously ludicrous, but not so many people are familiar enough with Prolog or its underlying formalisms to call you out on this.
Firstly, we must set one thing straight: Prolog definitionally does reasoning. Formal reasoning. This isn't debatable, it's a simple fact. It implements resolution (a computationally friendly inference rule over computationally-friendly logical clauses) that's sound and refutation complete, and made practical through unification. Your example is not even remotely close to how Prolog actually works, and excludes much of the extra-logical aspects that Prolog implements. Stripping it of any of this effectively changes the language beyond recognition.
> Plain Prolog's way of solving reasoning problems is effectively:
No. There is no cognate to what you wrote anywhere in how Prolog works. What you have here doesn't even qualify as a forward chaining system, though that's what it's closest to given it's somewhat how top-down systems work with their ruleset. For it to even approach a weaker forward chaining system like CLIPS, that would have to be a list of rules which require arbitrary computation and may mutate the list of rules it's operating on. A simple iteration over a list testing for conditions doesn't even remotely cut it, and again that's still not Prolog even if we switch to a top-down approach by enabling tabling.
> You hard code some options
A Prolog knowledgebase is not hardcoded.
> write a logical condition with placeholders
A horn clause is not a "logical condition", and those "placeholders" are just normal variables.
> and Prolog brute-forces every option in every placeholder.
Absolutely not. It traverses a graph proving things, and when it cannot prove something it backtracks and tries a different route, or otherwise fails. This is of course without getting into impure Prolog, or the extra-logical aspects it implements. It's a fundamentally different foundation of computation which is entirely geared towards formal reasoning.
> And that might have been nice compared to Pascal in 1975, it's not so different to modern garbage collected high level scripting languages.
It is extremely different, and the only reason you believe this is because you don't understand Prolog in the slightest, as indicated by the unsoundness of essentially everything you wrote. Prolog is as different from something like Javascript as a neural network with memory is.
The original suggestion was that LLMs should emit Prolog code to test their ideas. My reply was that there is nothing magic in Prolog which would help them over any other language, but there is something in other languages which would help them over Prolog - namely more training data. My example was to illustrate that, not to say Prolog literally is Python. Of course it's simplified to the point of being inaccurate, it's three lines, how could it not be.
> "A Prolog knowledgebase is not hardcoded."
No, it can be asserted and retracted, or consult a SQL database or something, but it's only going to search the knowledge the LLM told it to - in that sense there is no benefit to an LLM to emit Prolog over Python since it could emit the facts/rules/test cases/test conditions in any format it likes, it doesn't have any attraction to concise, clean, clear, expressive, output.
> "those "placeholders" are just normal variables"
Yes, just normal variables - and not something magical or special that Prolog has that other languages don't have.
> "Absolutely not. It traverses a graph proving things,"
Yes, though, it traverses the code tree by depth first walk. If the tree has no infinite left-recursion coded in it, that is a brute force walk. It proves things by ordinary programmatic tests that exist in other languages - value equality, structure equality, membership, expression evaluation, expression comparison, user code execution - not by intuition, logical leaps, analogy, flashes of insight. That is, not particularly more useful than other languages which an LLM could emit.
> "Your example is not even remotely close to how Prolog actually works"
> "There is no cognate to what you wrote anywhere in how Prolog works"
> "It is extremely different"
Well:
parent(timmy, sarah).
person(brian).
person(anna).
person(sarah).
person(john).
?- person(X), writeln(X), parent(timmy, X).
brian
anna
sarah
X = sarah
That's a loop over the people, filling in the variable X. Prolog is not looking at Ancestry.com to find who Timmy's parents are. It's not saying "ooh you have a SQLite database called family_tree I can look at". That it's doing it by a different computational foundation doesn't seem relevant when that's used to give it the same abilities.
My point is that Prolog is "just" a programming language, and not the magic that a lot of people feel like it is, and therefore is not going to add great new abilities to LLMs that haven't been discovered because of Prolog's obscurity. If adding code to an LLM would help, adding Python to it would help. If that's not true, that would be interesting - someone should make that case with details.
> "and the only reason you believe this is because you don't understand Prolog in the slightest"
This thread would be more interesting to everybody if you and hunterpayne would stop fantasizing about me, and instead explain why Prolog's fundamentally different foundation makes it a particularly good language for LLMs to emit to test their other output - given that they can emit virtually endless quantities of any language, custom writing any amount of task-specific code on the fly.
The discussion has become contentious and that's very unfortunate because there's clearly some confusion about Prolog and that's always a great opportunity to learn.
You say:
>> Yes, though, it traverses the code tree by depth first walk.
Here's what I suggest: try to think what, exactly, is the data structure searched by Depth First Search during Prolog's execution.
You'll find that this structure is what we call an SLD-Tree. That's a tree where the root is a Horn goal that begins the proof (i.e. the thing we want to dis-prove, since we're doing a proof by refutation); every other node is a new goal derived during the proof; every branch is a Resolution step between one goal and one definite program clause from a Prolog program; and every leaf of a finite branch is either the empty clause, signalling the success of the proof by refutation, or a non-empty goal that cannot be further reduced, which signals the failure of the proof. So that's basically a proof tree and the search is ... a proof.
So Prolog is not just searching a list to find an element, say. It's searching a proof tree to find a proof. It just so happens that searching a proof tree to find a proof corresponds to the execution of a program. But while you can use a search to carry out a proof, not every search is a proof. You have to get your ducks in a row the right way around otherwise, yeah, all you have is a search. This is not magick, it's just ... computer science.
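To make that concrete, here is a hedged, textbook-style picture of the SLD-tree for a two-clause toy program (again not from this thread):
p(X) :- q(X).
p(X) :- r(X).
q(a).
r(b).
% SLD-tree for the goal ?- p(X):
%
%              ?- p(X)
%             /        \
%        ?- q(X)      ?- r(X)
%           |            |
%           []           []      <- empty clauses: two refutations, X = a and X = b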
It should go without saying that you can do the same thing with Python, or with javascript, or with any other Turing-complete language, but then you'd basically have to re-invent Prolog, and implement it in that other language; an ad-hoc, informally specified, bug-ridden and slow implementation of half of Prolog, most like.
This is all without examining whether you can fix LLMs' lack of reasoning by funneling their output through a Prolog interpreter. I personally don't think that's a great idea. Let's see, what was that soundbite... "intelligence is shifting the test part of generate-test into the generate part" [1]. That's clearly not what pushing LLM output into a Prolog interpreter achieves. Clearly, if good, old-fashioned symbolic AI has to be combined with statistical language modelling, that has to happen much earlier in the statistical language modelling process. Not when it's already done and dusted and we have a language model; which is only statistical. Like putting the bubbles in the soda before you serve the drink, not after, the logic has to go into the language modelling before the modelling is done, not after. Otherwise there's no way I can see that the logic can control the modelling. Then all you have is generate-and-test, and it's meh as usual. Although note that much recent work on carrying out mathematical proofs with LLMs does exactly that, e.g. like DeepMind's AlphaProof. Generate-and-test works, it's just dumb and inefficient and you can only really make it work if you have the same resources as DeepMind and equivalent.
The way to look at this is first to pin down what we mean when we say Human Commonsense Reasoning (https://en.wikipedia.org/wiki/Commonsense_reasoning). Obviously this is quite nebulous and cannot be defined precisely but OG AI researchers have a done a lot to identify and formalize subsets of Human Reasoning so that it can be automated by languages/machines.
Prolog implements a language to logically interpret only within a formalized subset of human reasoning mentioned above. Now note that all our scientific advances have come from our ability to formalize and thus automate what was previously only heuristics. Thus if i were to move more of real-world heuristics (which is what a lot of human reasoning consists of) into some formal model then Prolog (or say LLMs) can be made to better reason about it.
Note however the paper beautifully states at the end;
Prolog itself is all form and no content and contains no knowledge. All the tasks, such as choosing a vocabulary of symbols to represent concepts and formulating appropriate sentences to represent knowledge, are left to the users and are obviously domain-dependent. ... For each particular application, it will be necessary to provide some domain-dependent information to guide the program writing. This is true for any formal languages. Knowledge is power. Any formalism provides us with no help in identifying the right concepts and knowledge in the first place.
So Real-World Knowledge encoded into a formalism can be reasoned about by Prolog. LLMs claim to do the same on unstructured/non-formalized data which is untenable. A machine cannot do "magic" but can only interpret formalized/structured data according to some rules. Note that the set of rules can be dynamically increased by ML but ultimately they are just rules which interact with one another in unpredictable ways. Now you can see where Prolog might be useful with LLMs. You can impose structure on the view of the World seen by the LLM and also force it to confine itself only to the reasoning it can do within this world-view by asking it to do predominantly Prolog-like reasoning, but you don't turn the LLM into just a Prolog interpreter. We don't know how this interacts with the other heuristic/formal reasoning parts (e.g. reinforcement learning) of LLMs, but it does seem to give more predictable and more correct output. This can then be iterated upon to get a final acceptable result.
The core idea of DeepClause is to use a custom Prolog-based DSL together with a metainterpreter implemented in Prolog that can keep track of execution state and implicitly manage conversational memory for an LLM. The DSL itself comes with special predicates that are interpreted by an LLM. "Vague" parts of the reasoning chain can thus be handed off to a (reasonably) advanced LLM.
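For readers unfamiliar with the technique, the kind of metainterpreter being described can be sketched, very roughly, as the classic "vanilla" interpreter plus an LLM hook; solve/1 and ask_llm/2 are illustrative names here, not DeepClause's actual API:
solve(true) :- !.
solve((A, B)) :- !, solve(A), solve(B).
solve(llm(Prompt, Answer)) :- !, ask_llm(Prompt, Answer).   % hand the "vague" steps to an LLM
solve(Goal) :- clause(Goal, Body), solve(Body).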
Would love to collect some feedback and interesting ideas for possible applications.
IIRC IBM’s Watson (the one that played Jeopardy) used primitive NLP (imagine!) to form a tree of factual relations and then passed this tree to construct Prolog queries that would produce an answer to a question. One could imagine that by swapping out the NLP part with an LLM, the model would have 1. a more thorough factual basis against which to write Prolog queries and 2. a better understanding of the queries it should write to get at answers (for instance, it may exploit more tenuous relations between facts than primitive NLP).
Not so "primitive" NLP. Watson started with what its team called a "shallow parse" of a sentence using a dependency grammar and then matched the parse to an ontology consisting of good, old fashioned frames [1]. That's not as "advanced" as an LLM but far more reliable.
I believe the ontology was indeed implemented in Prolog but I forget the architecture details.
We've done this, and it works. Our setup is to have some agents that synthesize Prolog and other types of symbolic and/or probabilistic models. We then use these models to increase our confidence in LLM reasoning and iterate if there is some mismatch. Making synthesis work reliably on a massive set of queries is tricky, though.
Imagine a medical doctor or a lawyer. At the end of the day, their entire reasoning process can be abstracted into some probabilistic logic program which they synthesize on-the-fly using prior knowledge, access to their domain-specific literature, and observed case evidence.
There is a growing body of publications exploring various aspects of synthesis, e.g. references included in [1] are a good starting point.
I am once again shilling the idea that someone should find a way to glue Prolog and LLMs together for better reasoning agents.
There are definitely people researching ideas here. For my own part, I've been doing a lot of work with Jason[1], a very Prolog like logic language / agent environment with an eye towards how to integrate that with LLMs (and "other").
Nothing specific / exciting to share yet, but just thought I'd point out that there are people out there who see potential value in this sort of thing and are investigating it.
>>Prolog doesn't look like javascript or python so:
Think of it this way. In Python and JavaScript you write code, and to test if it's correct you write unit test cases.
A Prolog program is basically a bunch of test cases/unit test cases: you write them, and then tell the Prolog compiler, 'write code that passes these test cases'.
That is, you are writing the program specification, or tests that, if passed, would represent a solution to the problem. The job of the compiler is to write the code that passes these test cases.
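A hedged illustration of that way of looking at it: the clauses below only state what a sorted permutation is, and the resolution-based interpreter is left to find a list that satisfies the specification (permutation/2 is from SWI-Prolog's library(lists)).
sorted([]).
sorted([_]).
sorted([X, Y|T]) :- X =< Y, sorted([Y|T]).
naive_sort(List, Sorted) :- permutation(List, Sorted), sorted(Sorted).
% ?- naive_sort([3, 1, 2], S).
% S = [1, 2, 3] .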
It's been a while since I have done web dev, but web devs back then were certainly not scared of any language. Web devs are like the ultimate polyglots. Or at least they were. I was regularly bouncing around between a half dozen languages when I was doing pro web dev. It was web devs who popularized numerous different languages to begin with simply because delivering apps through a browser allowed us a wide variety of options.
No web dev I have ever met could use Prolog well. I think your statement about web devs being polyglots is based upon the fact that web devs chase every industry fad. I think that has a lot to do with the nature and economics of web dev work (I'm not blaming the web devs for this). I mean the best way to succeed as a webdev is to write your own version of a framework that does the same thing as the last 10 frameworks but with better buzzword marketing.
Generally speaking, all the languages they know are pretty similar to each other. Bolting on lambdas isn't the same as doing pure FP. Also, anytime a problem comes up where you would actually need a weird language based upon different math, those problems will be assigned to some other kind of developer (probably one with a really strong CS background).
That you haven’t met any webdevs using prolog probably is because 1) prolog is a very rare language among devs in general not just webdevs (unless you count people that did prolog in a course 20 years ago and remember nothing) 2) prolog just isn’t that focused on webdev (like saying ”not many embedded devs know react so I guess it is because react is too hard for them”)
Maybe they were, but these days everything must be in JS syntax. Even if it is longer than pure CSS, they want the CSS inside JS syntax. They are only ultimate polyglot as long as all the languages are actually JS.
(Of course this is an overgeneralization, since obviously, there are web developers, who do still remember how to do things in HTML, CSS and, of course JS.)
As someone who did deep learning research 2017-2023, I agree. "Neurosymbolic AI" seems very obvious, but funding has just been getting tighter and more restrictive towards the direction of figuring out things that can be done with LLMs. It's like we collectively forgot that there's more than just txt2txt in the world.
YES! I've run a few experiments on classical logic problems and an LLM can spit out Prolog programs to solve the puzzle. Try it yourself: ask an LLM to write some Prolog to solve some problem and then copy-paste it into https://swish.swi-prolog.org/ and see if it runs.
Can't find the links right now, but there were some papers on llm generating prolog facts and queries to ground the reasoning part. Somebody else might have them around.
There's a lot of work in this area. See e.g., the LoRP paper by Di et al. There's also a decent amount of work on the other side too, i.e., using LLMs to convert Prolog reasoning chains back into natural language.
I've been thinking a lot about this, and I want to build the following experiment, in case anyone is interested:
The experiment is about having an LLM play plman[0] with and without Prolog's help.
plman is a pacman-like game for learning Prolog. It was written by professor Francisco J. Gallego from Alicante University to teach the logic subject in computer science.
Basically you write a solution in Prolog for a map, and plman executes it step by step so you can visually see the pacman (plman) moving around the maze, eating and avoiding ghosts and other traps.
There is an interesting dynamic about finding keys for doors and timing based traps.
There are different levels of complexity, and you can also easily write your own maps, since they are just ascii characters in a text file.
I thought this was the perfect project to visually explain to my coworkers the limits of LLM "reasoning" and what symbolic reasoning is.
So far I hooked up the ChatGPT API to try to solve scenarios, and it fails even with a substantial amount of retries. That's what I was expecting.
The next thing would be to write an MCP tool so that the LLM can navigate the problem by using the tool, but here is where I need guidance.
I'm not sure about the best dynamic to prove the usefulness of Prolog in a way that goes beyond what context retrieval or a db query could do.
I'm not sure if the LLM should write the Prolog solution. I want to avoid building something trivial, like the LLM asking for the already-solved steps, so my intuition is telling me that I need some sort of virtual joystick MCP to hide Prolog from the LLM, so the LLM could have access to the current state of the screen and ask questions like: what would be my position if I move up?
What's the position of the ghost on the next move? Where is the door relative to my current position?
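One hedged way to sketch that "virtual joystick" layer in Prolog itself; the pos/2 and wall/1 terms are assumptions standing in for whatever plman actually exposes:
% Grid moves; positions are pos(Column, Row), with Row growing downwards.
move(up,    pos(X, Y), pos(X, Y1)) :- Y1 is Y - 1.
move(down,  pos(X, Y), pos(X, Y1)) :- Y1 is Y + 1.
move(left,  pos(X, Y), pos(X1, Y)) :- X1 is X - 1.
move(right, pos(X, Y), pos(X1, Y)) :- X1 is X + 1.
% "What would be my position if I move up?" - the kind of question the MCP tool answers.
position_after(Dir, Here, There) :-
    move(Dir, Here, There),
    \+ wall(There).              % wall/1 would come from the loaded map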
I don't have the academic background to design this experiment properly. It would be great if anyone is interested in working together on this, or can give me some advice.
Prior work pending on my reading list:
- LoRP: LLM-based Logical Reasoning via Prolog [1]
- A Pipeline of Neural-Symbolic Integration to Enhance Spatial Reasoning in Large Language Models [2]
Prolog really is such a fantastic system; if I can justify its usage then I won't hesitate to do so. Most of the time I'll call a language that I find to be powerful a "power tool", but that doesn't apply here. Prolog is beyond a power tool. A one-off bit of experimental tech built by the greatest minds of a forgotten generation. You'd find it deep in the irradiated ruins of a dead city, buried far underground in a bunker easily missed. A supercomputer with the REPL's cursor flickering away in monochrome phosphor. It's sitting there, forgotten. Dutifully waiting for you to jack in.
When I entered university for my Bachelors, I was 28 years old and already worked for 5 or 6 years as a self-taught programmer in the industry. In the first semester, we had a Logic Programming class and it was solely taught in Prolog.
At first, I was mega overwhelmed. It was so different than anything I did before and I had to unlearn a lot of things that I was used to in "regular" programming. At the end of the class, I was a convert! It also opened up my mind to functional programming and mathematical/logical thinking in general.
I still think that Prolog should be mandatory for every programmer. It opens up the mind in such a logical way... Love it.
Unfortunately, I never found an opportunity in my 11 years since then to use it in my professional practice. Or maybe I just missed the opportunities?????
Did they teach you how to use DCGs? A few months ago I used EDCGs as part of a de-spaghettification and bug fixing effort to trawl a really nasty 10k loc sepples compilation unit and generate tags for different parts of it. Think ending up with a couple thousand ground terms like:
tag(TypeOfTag, ParentFunction, Line).
Type of tag indicating things like an unnecessary function call, unidiomatic conditional, etc.
I then used the REPL to pull things apart, wrote some manual notes, and then consulted my complete knowledge base to create an action plan. Pretty classical expert system stuff. Originally I was expecting the bug fixing effort to take a couple of months. In the end: 10 days of Prolog code + 2 days of Prolog interaction + 3 days of sepples weedwacking and adjusting what remained in the plugboard.
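For readers who haven't seen that workflow, a toy sketch of what consulting such a tag knowledge base in the REPL looks like (the tag names, functions, and line numbers here are invented):

    tag(unnecessary_call,        parse_config,  1042).
    tag(unidiomatic_conditional, parse_config,  1077).
    tag(unnecessary_call,        render_output, 2310).

    % collect every tag recorded against one function
    tags_for(Fun, Tags) :-
        findall(Type-Line, tag(Type, Fun, Line), Tags).

    % ?- tags_for(parse_config, Tags).
    % Tags = [unnecessary_call-1042, unidiomatic_conditional-1077].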
Prolog is a great language to learn. But I wouldn't want to use it for anything more than what it's directly good at. Especially the cut operator, that's pretty mind bending. But once you get good at it, it all just flows. But I doubt more than 1% of devs could ever master it, even on an unlimited timeline. It's just much harder than any other type of non-research dev work.
But some parts, like e.g. the cut operator is something I've copied several times over for various things. A couple of prototype parser generators for example - allowing backtracking, but using a cut to indicate when backtracking is an error can be quite helpful.
That may make sense for Prolog code - I don't know Prolog enough to say. But the places I like to use it, it significantly simplified code by letting me write grammars with more local and specific error reporting.
That is, instead of continuing to backtrack, I'd use a cut-like operator to say "if you backtrack past this, then the error is here, and btw. (optionally) here is a nicer error message".
This could of course alter semantics. E.g. if I had a rule "expr ::= (foo ! bar) | (foo baz)", then "foo baz" would never get satisfied, whereas with "expr ::= (foo bar) | (foo baz)" it could. (And in that example, it'd be totally inappropriate in my parser generator too.)
I'm guessing the potential to have non-local effects on the semantics is why you'd consider it problematic in Prolog? I can see it would be problematic if the cut is hidden away from where it would affect you.
In my use, the grammar files would typically be a couple of hundred lines at most, and the grammar itself well understood, and it was used explicitly to throw an error, so you'd instantly know.
There are (at least) two ways of improving on that, which I didn't bother with: I could use it to say "push the error message and location" and pop those errors if a given subtree of the parse was optional. Or I could validate that these operators don't occur in rules that are used in certain ways.
But in practice in this use I never ended up with big enough code that it seemed worth it, and would happily litter the grammars with lots of them.
I used to use a cut operator about every 2 to 4 rules. If you are constantly using it as error handling, I would agree you are using it too often. If you are using it to turn sets into scalars or cells, then you are using it correctly. It just makes the code really hard to reason about and maintain.
There was a time when the thinking was that you could load all the facts into a Prolog engine and it would replace experts like doctors and engineers - expert systems. It didn't work. Now it's a curiosity.
My prolog anecdote: ~2001 my brother and I writing an A* pathfinder in prolog to navigate a bot around the world of Asheron's Call (still the greatest MMORPG of all time!). A formative experience in what can be done with code. Others had written a plugin system (called Decal) in C for the game and a parser library for the game's terrain file format. We took that data and used prolog to write an A* pathfinder that could navigate the world, avoiding un-walkable terrain and even using the portals to shortcut between locations. Good times.
There seems to be an interesting difference between Prolog and conventional (predicate) logic.
In Prolog, anything that can't be inferred from the knowledge base is false. If nothing about "playsAirGuitar(mia)" is implied by the knowledge base, it's false. All the facts are assumed to be given; therefore, if something isn't given, it must be false.
Predicate logic is the opposite: if I can't infer anything about "playsAirGuitar(mia)" from my axioms, it might be true or false. Its truth value is unknown. It's true in some models of the axioms, and false in others. The statement is independent of the axioms.
Deductive logic assumes an open universe, Prolog a closed universe.
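A minimal illustration of that closed-world behaviour at the Prolog prompt, using the textbook playsAirGuitar example:

    playsAirGuitar(jody).

    % ?- playsAirGuitar(jody).     % true: derivable from the database
    % ?- playsAirGuitar(mia).      % false: nothing about mia can be derived
    % ?- \+ playsAirGuitar(mia).   % true: negation as failure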
It's not really false I think. It's 'no', which is an answer to a question "Do I know this to be true?"
I think there should be a room for three values there: true, unprovable, false. Where false things are also unprovable. I wonder if Prolog has false, defined as "yes" of the opposite.
> It's not really false I think. It's 'no', which is an answer to a question "Do I know this to be true?"
I don't think so, because in this case both x and not-x could be "no", but I think in Prolog, if x is "no", not-x is "yes", even if neither is known to be true. It's not a three-valued logic; it adheres to the law of the excluded middle.
If x is "no" (I do not know this to be true) then not-x is "yes" (I do know this to be true). So negation still works as usual.
"Yes" is not "true" but rather "provably true". And "no" is not "false" but rather "not provably true".
Third sensible value in this framework (which I think Prolog doesn't have) would be "false" meaning "it's provably false" ("the opposite of it is provably true").
To be frank, I think newer Prolog implementations completely abandoned this nuance and just report "true" and "false" instead of "yes" and "no".
> If x is "no" (I do not know this to be true) then not-x is "yes" (I do know this to be true). So negation still works as usual.
As I said though, that doesn't make sense. Because if I don't know x to be true because it is not mentioned in the knowledge base, I also don't know not-x to be true. So both would have to be "no". But they aren't. Therefore the knowledge interpretation is incorrect. Knowledge wouldn't be closed under negation. If you don't know something to be true, that doesn't imply that you know it to be false.
I recently implemented an eagerly evaluated embedded Prolog dialect in Dart for my game applications. I used SWI documentation extensively to figure out what to implement.
But I think I had the most difficulty designing the interface between the logic code and Dart. I ended up with a way to add "Dart-defined relations", where you provide relations backed dynamically by your ECS or database. State stays in imperative land, rules stay in logic land.
Testing on Queens8, SWI is about 10,000 times faster than my implementation. It's a work of art! But it doesn't have the ease of use in my game dev context as a simple Dart library does.
I only read the first 88 pages of Prolog Programming in Depth but I found it to be the best introductory book for programming in Prolog because it presents down to earth examples of coding like e.g. reading a file, storing data. Most other books are mainly or only focused on the pure logic stuff of Prolog but when you program you need more.
Another way of getting stuff done would be to use another programming language with its standard library (with regex, networking, json, ...) and embed or call Prolog code for the pure logic stuff.
I've recently started modeling some of my domains/potential code designs in Prolog. I'm not that advanced. I don't really know Prolog that well. But even just using a couple basic prolog patterns to implement a working spec in the 'prolog way' is *unbelievably* useful for shipping really clean code designs to replace hoary old chestnut code. (prolog -> ruby)
I keep wishing for "regex for prolog", ie: being able to (in an arbitrary language) express some functional bits in "prolog-ish", and then be able to ask/query against it.
let prologBlob = new ProLog()
prologBlob.add( "a => b" ).add( "b => c" )
prologBlob.query( "a == c?" ) == True
(not exactly that, but hopefully you get the gist)
There's so much stuff regarding constraints, access control, and relationship queries that could be expressed "simply" in Prolog, and being able to extract out those interior bits for further use in your more traditional programming language would be really helpful! (...at least in my imagination ;-)
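For what it's worth, the interior bits of that gist map almost directly onto plain Prolog clauses; a minimal sketch, modelling "=>" as a fact and entailment as transitive reachability (names invented):

    implies(a, b).
    implies(b, c).

    entails(X, Y) :- implies(X, Y).
    entails(X, Z) :- implies(X, Y), entails(Y, Z).

    % ?- entails(a, c).   % true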
While usually using native syntax rather than strings, something like that exists for most languages of any popularity (and many obscure ones), in the form of miniKanren implementations.
If you really want something that takes Prolog strings instead (and want the full power of prolog), then there are bindings to prolog interpreters from many languages, and also SWI-Prolog specifically provides a fairly straightforward JSON-based server mode "Machine Query Interface" that should be fairly simple to interface with any language.
I've wished for the same kind of 'embed prolog in my ruby' for enumerating all possible cases, all invalid cases, etc in test suites. Interesting to know it's not just me!
I did try ruby-prolog. The deeper issue is that it's just not Prolog. Writing in actual Prolog affords a lot of clarity and concision which would be quite noisy in ruby-prolog. To me, the difference was stark enough that it outweighed any convenience of already being in Ruby.
I wonder if there are examples of whole product architectures done in Prolog; it seems like an elegant solution if done right. I've been looking for a concise way to model the full architectures of my various projects, without relying on a typical markdown file.
Which is separate from the actual types in the code.
Which is separate from the deployment section of the docs.
I studied prolog back in 2014. It was used in AI course. I found it very confusing: trying to code A*, N-Queens, or anything in it was just too much.
Python, in contrast, was a god-send.
I failed the subject twice in my MSc (luckily passing the MSc was based on the total average), but did a similar course in UC Berkeley, with python: aced it, loved it, and learned a lot.
A similar thing happened at my university in an Advanced Algorithms course. Students failed it so much, the university was forced to make the course easier to pass, by removing the minimum grade to pass.
I believe your case (and many other students) is that you couldn't abstract yourself from imperative programming (python) into logic programming (prolog).
Performance, far better performance. Same reason you ever use SQL. Prolog can do the same thing for very specific problems.
PS Prolog is a Horn clause solver. You characterizing it as a query language for a graph database, well it doesn't put you in the best light. It makes it seem like you don't understand important foundational CS math concepts.
I have no idea why are you dissing query languages. Software that makes those work is immensely complex and draws on a ton of CS math concepts and practical insights. But maybe you don't understand that.
I'm using SQL to do SQL things. And I'm sure when I somehow encounter the 1% of problems that prolog is the right fit for I'd be delighted to use it. However doing general algorithms in Prolog is as misguided as in SQL.
> I have no idea why are you dissing query languages.
I'm not. I'm pointing out that saying a Horn clause interpreter is a graph query language indicates a fundamental misunderstanding on your part. Prolog handles anything you want to say in formal logic very well (at the cost of not doing anything else well).
SQL on the other hand uses a completely different mathematical framework (relational algebra and set theory). This allows really effective optimization and query planning on top of a DB kernel.
A graph DB query language on the other hand should be based upon graph theory. Which is another completely different mathematical model. I haven't been impressed by the work in this area. I find these languages are too often dialects of SQL instead of a completely different thing based upon the correct mathematical model.
PS I used to write DBs. Discretion is the better part of valor here.
I remember writing a Prolog(ish) interpreter in Common Lisp in a '90s AI course in grad school for theorem proving (which is essentially what Prolog is doing under the hood). Really foundational to my understanding of how declarative programming works. In an ideal world I would still be programming in Lisp and using Prolog tools.
Speaking as someone who just started exploring Prolog and Lisp, and who ended up in the frozen north isolated from internet access: the tools were initially locked/commercial only during a critical period, and then everyone was oriented around GUIs, and GUI environments were very hostile to the historical tools, and thus provided a different kind of access barrier.
A side one is that the LISP ecology in the 80s was hostile to "working well with others" and wanted to have their entire ecosystem in their own image files. (which, btw, is one of the same reasons I'm wary of Rust cough)
Really, it's only become open once more with the rise of WASM, systemic efficiency of computers, and open source tools finally being pretty solid.
I can tell you, from the year 2045, that running the worlds global economy on Javascript was the direct link to the annihilation of most of our freedom and existence. Hope this helps.
It is not nostalgia. It is mathematical thought. It is more akin to an equation and more provably correct. Closer to fundamental truth -- like touching fundamental reality.
I remember a project I did in undergrad with Prolog that fit connecting parts of theoretical widgets together based on constraints about how different pieces could connect, and it just worked instantly. It felt like magic because I had absolutely no clue how I would have coded that in Pascal or COBOL at the time. It blew my mind because the program was so simple.
Prolog is easily one of my favorite languages, and as many others in this thread, I first encountered it during university. I ended up teaching it for a couple of years (along with Haskell) and ever since, I've gone on an involuntary prolog bender of sorts once or twice a year. I almost always use it for Advent of code as well.
Declarative languages are fantastic for reasoning about code.
But the true power is unlocked once the underlying libraries are implemented in a way that surpasses the performance that a human can achieve.
Since implementation details are hidden, caches and parallelism can be added without the programmer noticing anything else than a performance increase.
This is why SQL has received a boost the last decade with massively parallel implementations such as BigQuery, Trino and to some extent DuckDB. And what about adding a CUDA backend?
But all this comes at a cost and needs to be planned so it is only used when needed.
Because it's more powerful than MongoDB or Fortran. The cut operator for instance gives it the ability to express things you just can't do in those other systems. The trade-off is that mastering the cut operator is a rare skill, and only the one person who can do it can maintain the Prolog code. Compare that with MongoDB, where even the village idiot can use it but with a huge performance cost.
I don't know about MongoDB and its query language, but wrt Fortran, it's unreasonable to say that Prolog is more powerful than Fortran (or vice versa). A more reasonable statement is that Prolog is more expressive than Fortran (though this gets fuzzy, we have to define expressiveness in a way that lets us rank languages). But the power of a language normally means what we can compute using that language. Prolog and Fortran both have the same level of "power", but it's certainly fair to say that expressing many programs is easier in Prolog than Fortran, and there are some (thinking back to my scientific computing days) that are easier to express in Fortran than Prolog.
I would say most programs are easier in Fortran. But there are things you can't express in Fortran but you can in Prolog. There is nothing like the cut operator in Fortran for example. They are very different animals.
You seem to be confusing two different things: What is easily or natively expressed in the language, and what can be expressed in the language.
You can create a logical equivalent of the cut operator in Fortran if you wanted to, but there's no native mechanism or operator to rely on. The languages possess the same computing "power", the difference is not in what they can compute which is your claim with "there are things you can't express in Fortran but you can in Prolog" (utter nonsense). Anything you can get a Prolog program to do, you can get a Fortran program to do (and vice versa).
> You can create a logical equivalent of the cut operator in Fortran if you wanted to
In isolation, no you can't. You could implement a Prolog interpreter in Fortran, however. And if you did that, you would be able to write a cut operator, because then you are interacting directly with Prolog's machinery. Part of the definition of the cut operator involves changing how code around it behaves. You can't do this with Fortran (or other languages) normally. Then there is the entire concept of backtracking, which isn't native in any other language (that I know of).
You could probably make a very poor cut operator in a language with an Any/Object type and casting, but why would you? You are not wrong about the math. But you are ignoring the absurd amount of code you would have to write to do it. It's a bit hand-wavy to say that because you can implement Prolog in a language, it's just as powerful. That is mathematically correct, but in practice it really isn't.
I think the cut operator doesn't make sense for any other language because Prolog doesn't execute code linearly. It executes it as a depth-first search with backtracking. Only when you have a thing that walks the tree does it make sense to have a cut operator that prevents backtracking at some spots.
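A minimal sketch of that, using a first-match lookup where the cut commits to the first answer and prevents backtracking into the remaining clauses:

    entry(color, red).
    entry(color, blue).
    entry(size,  large).

    % commit to the first matching entry; the cut stops backtracking into later ones
    first_match(Key, Value) :- entry(Key, Value), !.

    % ?- first_match(color, V).   % V = red, and no second answer on backtracking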
I don't think the person you responded to knows what the cut operator is, or they wouldn't have written any of their nonsense comments. They seem to think that it's some magical thing and not, as you wrote, a way to stop backtracking from going back through some point. You can implement that in any appropriate search system in any language. It might not be an operator, but it would carry the same meaning and effect.
“A touch! A distinct touch!” cried Holmes. "You are developing a certain unexpected vein of pawky humour, Watson, against which I must learn to guard myself".
-- from "The Valley of Fear" by Arthur Conan Doyle.
I recently asked @grok about Prolog being useless incomprehensible shit for anything bigger than one page:
Professionals write Prolog by focusing on the predicates and relations and leaving the execution flow to the interpreter. They also use the Constraint Logic Programming extensions (like clpfd) which use smart, external algorithms to solve problems instead of relying on Prolog's relatively "dumb" brute-force search, which is what typically leads to the "exploding brain" effect in complex code.
--- Worth mentioning here is that I wrote Prolog all on my own in 1979. On top of Nokolisp of course. There was no other functioning Prolog at that time I knew about.
Thereafter I have often planned "Infinity-Prolog", which can solve impossible problems with lazy evaluation.
I just learned from @grok that this Constraint Logic is basically what I was aiming at.
I really enjoyed learning Prolog in university, but it is a weird language. I think that for 98% of tasks I would not want to use Prolog, but for the remaining 2% of tasks it's extremely well suited. I have always wished that I could easily call Prolog from other languages when it suited the use case; however, good luck getting most companies to allow writing some code in Prolog.
That is where Lisp or Scheme weirdly shines. It is incredibly easy to add prolog to a Lisp or a Scheme. It’s almost as if it comes out naturally if you just go down the rabbit hole.
“The little prover” is a fantastic book for that. The whole series is.
One can of course add the same stuff to other languages in the form of libraries and such, but Lisp/Scheme make it incredibly easy to make it look like part of the language itself and seem a mere extension of the language. So you can have both worlds if you want to. Lisp/Scheme is not dead.
In fact, in recent years people have started contributing again and are rediscovering the merits.
Racket really shines in this regard: Racket makes it easy to build little DSLs, but they all play perfectly together because the underlying data model is the same. Example from the Racket home page: https://racket-lang.org/#any-syntax
You can have a module written in the `#lang racket` language (i.e., regular Racket) and then a separate module written in `#lang datalog`, and the two can talk to each other!
I love Prolog, and have seen so many interesting use cases for it.
In the end though, it mostly just feels enough of a separate universe to any other language or ecosystem I'm using for projects that there's a clear threshold for bringing it in.
If there was a really strong prolog implementation with a great community and ecosystem around, in say Python or Go, that would be killer. I know there are some implementations, but the ones I've looked into seem to be either not very full-blown in their Prolog support, or have close to non-existent usage.
"Sometimes, when you introduce Prolog in an organization, people will dismiss the language because they have never heard of anyone who uses it. Yet, a third of all airline tickets is handled by systems that run SICStus Prolog. NASA uses SICStus Prolog for a voice-controlled system onboard the International Space Station. Windows NT used an embedded Prolog interpreter for network configuration. New Zealand's dominant stock broking system is written in Prolog and CHR. Prolog is used to reason about business grants in Austria."
Some other notable real projects using Prolog are TerminusDB, the PLWM tiling window manager, GeneXus (which is a kind of a low-code platform that generated software from your requirements before LLMs were a thing), the TextRazor scriptable text-mining API. I think this should give you a good idea of what "Prolog-shaped" problems look like in the real world.
Others have more complete answers, but the value for me of learning Prolog (in college) was being awakened to a refreshingly different way of expressing a program. Instead of saying "do this and this and this", you say "here's what it would mean for the program to be done".
At work, I bridged the gap between task tracking software and mandatory reports (compliance, etc.). Essentially, it handles distributing the effective working hours of workers across projects, according to a varied and very detailed set of constraints (people take time off, leave the company and come back, sick days, different holidays for different remote workers, folks work on multiple stuff at the same time, have gaps in task tracking, etc.).
In the words of a colleague responsible for said reports it 'eliminated the need for 50+ people to fill timesheets, saves 15 min x 50 people x 52 weeks per year'
It has been (and still is) in use for 10+years already. I'd say 90% of the current team members don't even know the team used to have to "punch a clock" or fill timesheets way back.
Any kind of problem involving the construction, search or traversal of graphs of any variety from cyclic semi-directed graphs to trees, linear programming, constraint solving, compilers, databases, formal verification of any kind not just theorem proving, computational theory, data manipulation, and in general anything.
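To give one tiny concrete instance of the graph case (the edge data is invented):

    edge(a, b).  edge(b, c).  edge(c, d).

    % path(Start, End, Path): Path is a list of nodes from Start to End
    path(X, X, [X]).
    path(X, Z, [X|Rest]) :- edge(X, Y), path(Y, Z, Rest).

    % ?- path(a, d, P).   % P = [a, b, c, d]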
It looks like that is in reference to the embedded interactive code blocks. If you use uBlock Origin you can use the element picker to remove the annoying image.
Python wins out in the versatility conversation because of its ecosystem, I'm still kinda convinced that the language itself is mid.
Prolog has many implementations and you don't have the same wealth of libraries, but yes, it's Turing complete and not of the "Turing tarpit" variety, you could reasonably write entire applications in SWI-Prolog.
Right, Python is usually the second-best choice for a language for any problem --- arguably the one thing it is best at is learning to program (in Python) --- it wins based on ease-of-learning/familiarity/widespread usage/library availability.
Personally I find Python more towards the bottom of the list with me, despite being the language I learned on. Especially if the code involved is "pythonic". Just doesn't jive with my neurochemistry. All the problems of C++ with much greater ambiguity, and I've never really been impressed with the library ecosystem. Yeah there's a lot, but just like with node it's just a mountain of unusably bad crap.
I think lua is the much better language for a wide variety of reasons (Most of the good Python libraries are just wrappers around C libraries, which is necessary because Python's FFI is really substandard), but I wouldn't reach for python or lua if I'm expecting to write more than 1000 lines of code. They both scale horribly.
I don't know if I would say it's second-best. It just happened to get really popular because it has relatively easy syntax, and NumPy is a really great library, making all of those scientific packages that people were using Fortran and C++ for available in an easier language. This boosted the language right when data science became a thing, right when dynamic languages became popular, right when "Learn 2 Code, forget about learning fundamentals" was a thing. It's an okay language I guess, but I really think it was lucky that NumPy exists, rather than a Numby or Numphp.
That's not why Python is popular. Python is popular because universities don't provide technical support to researchers (which they should). So those researchers picked up the scripting language the sysops in the univ clusters were using. Those same researchers left academia but never learned any CS or other programming languages. Instead they used the 'if all you have is a hammer, everything is a nail' logic and used Python to glue together libraries, mostly written in C.
PS The big companies that actually make the LLMs don't use Python (anymore). It's a lousy language for ML/AI. It's designed to script Linux GUIs and automate tasks. It started off as a Perl replacement after all. And this isn't a slight on the folks who write Python itself. It is a problem for all the folks who insist on slamming it into all sorts of places where it isn't well suited, because they won't learn any CS.
FWIK; You can't compare the two. Python is far more general and larger than Prolog which is more specialized. However there have been various extensions to Prolog to make it more general. See Extensions section in Prolog wikipedia page - https://en.wikipedia.org/wiki/Prolog#Extensions Eg. Prolog++ - https://en.wikipedia.org/wiki/Prolog%2B%2B to allow one to do large-scale OO programming with Prolog.
Earlier, Prolog was used in AI/Expert Systems domains. Interestingly it was also used to model Requirements/Structured Analysis/Structured Design and in Prototyping. These usages seem interesting to me since there might be a way to use these techniques today with LLMs to have them generate "correct" code/answers.
In theory, it's as versatile as Python et al[0] but if you're using it for, e.g., serving bog-standard static pages over HTTP, you're very much using an industrial power hammer to apply screws to glass - you can probably make it work but people will look at you funny.
[0] Modulo that Python et al almost certainly have order(s) of magnitude more external libraries etc.
It's a language that should have just been a library. There's nothing noteworthy about it and it's implementable in any working language. Sometimes quite neatly. Schelog is a famous example.
Do you mean Northern Conservative Baptist Great Lakes Region Council of 1879 standard Prolog?[2]
SWI Prolog (specifically, see [2] again) is a high level interpreted language implemented in C, with an FFI to use libraries written in C[1], shipping with a standard library for HTTP, threading, ODBC, desktop GUI, and so on. In that sense it's very close to Python. You can do everyday ordinary things with it, like compute stuff, take input and output, serve HTML pages, process data. It starts up quickly, and is decently performant within its peers of high level GC languages - not v8 fast but not classic Java sluggish.
In other senses, it's not. The normal Algol-derivative things you are used to (arithmetic, text, loops) are clunky and weird. It's got the same problem as other declarative languages - writing what you want is not as easy as it seemed like it was going to be, and performance involves contorting your code into forms that the interpreter/compiler is good with.
It's got the problems of functional languages - everything must be recursion. Having to pass the whole world state in and out of things. Immutable variables and datastructures are not great for performance. Not great for naming either, temporary variable names all over.
It's got some features I've never seen in other languages - the way the constraint logic engine just works with normal variables is cool. Code-is-data-is-code is cool. Code/data is metaprogrammable in a LISP macro sort of way. New operators are just another predicate. Declarative Grammars are pretty unique.
The way the interpreter will try to find any valid path through your code - the thing which makes it so great for "write a little code, find a solution" - makes it tough to debug why things aren't working. And hard to name things, code doesn't do things it describes the relation of states to each other. That's hard to name on its own, but it's worse when you have to pass the world state and the temporary state through a load of recursive calls and try to name that clearly, too.
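Something along these lines (a reconstructed sketch, since the exact snippet isn't reproduced here):

    count_down(0).
    count_down(X) :- write(X), nl, count_down(X - 1).

    % ?- count_down(3).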
It's a recursive countdown. There's no deliberate typos in it, but it won't work. The reason why is subtle - that code is doing something you can't do as easily in Python. It's passing a Prolog source code expression of X-1 into the recursive call, not the result of evaluating X-1 at runtime. That's how easy metaprogramming and code-generation is! That's why it's a fun language! That's also how easy it is to trip over "the basics" you expect from other languages.
It's full of legacy, even more than Python is. It has a global state - the Prolog database - but it's shunned. It has two or three different ways of thinking about strings, and it has atoms. ISO Prolog doesn't have modules, but different implementations of Prolog do have different implementations of modules. Literals for hashtables are contentious (see [2] again). Same for object orientation, standard library predicates, and more.
And for some cases it's easier to understand if you write the backtracking yourself, and can edit/debug it. That is, in case you write readable code professionally, as such algorithms are not very intuitive for a person who sees them for the first time.
You might have forgotten the language but I bet it must have had some influence on how you think or write programs today. I don’t think the value of learning Prolog is necessarily that you can then write programs in Prolog, but that it shifts your perspective and adds another dimension to how you approach problems. At least this is what it has done for me and I find that still valuable today.
It means that timonoko doesn't like to think and would rather ask grok to think for them and post weird comments about it here on HN. They've been doing this for a while.
I am once again shilling the idea that someone should find a way to glue Prolog and LLMs together for better reasoning agents.
https://news.ycombinator.com/context?id=43948657
Thesis:
1. LLMs are bad at counting the number of r's in strawberry.
2. LLMs are good at writing code that counts letters in a string.
3. LLMs are bad at solving reasoning problems.
4. Prolog is good at solving reasoning problems.
5. ???
6. LLMs are good at writing prolog that solves reasoning problems.
Common replies:
1. The bitter lesson.
2. There are better solvers, ex. Z3.
3. Someone smart must have already tried and ruled it out.
Successful experiments:
1. https://quantumprolog.sgml.net/llm-demo/part1.html
> "4. Prolog is good at solving reasoning problems."
Plain Prolog's way of solving reasoning problems is effectively:
You hard code some options, write a logical condition with placeholders, and Prolog brute-forces every option in every placeholder. It doesn't do reasoning.

Arguably it lets a human express reasoning problems better than other languages by letting you write high level code in a declarative way, instead of allocating memory and choosing data types and initializing linked lists and so on, so you can focus on the reasoning, but that is no benefit to an LLM which can output any language as easily as any other. And that might have been nice compared to Pascal in 1975, it's not so different to modern garbage collected high level scripting languages. Arguably Python or JavaScript will benefit an LLM most because there are so many training examples inside it, compared to almost any other language.
>> You hard code some options, write a logical condition with placeholders, and Prolog brute-forces every option in every placeholder. It doesn't do reasoning.
SLD-Resolution with unification (Prolog's automated theorem proving algorithm) is the polar opposite of brute force: as the proof proceeds, the cardinality of the set of possible answers [1] decreases monotonically. Unification itself is nothing but a dirty hack to avoid having to ground the Herbrand base of a predicate before completing a proof; which is basically going from an NP-complete problem to a linear-time one (on average).
Besides which I find it very difficult to see how a language with an automated theorem prover for an interpreter "doesn't do reasoning". If automated theorem proving is not reasoning, what is?
___________________
[1] More precisely, the resolution closure.
> "as the proof proceeds, the cardinality of the set of possible answers [1] decreases"
In the sense that it cuts off part of the search tree where answers cannot be found?
will never do the slow_computation - but if it did, it would come up with the same result. How is that the polar opposite of brute force, rather than an optimization of brute-force?

If a language has tail call optimization then it can handle deeper recursive calls with less memory. Without TCO it would do the same thing and get the same result but using more memory, assuming it had enough memory. TCO and non-TCO aren't polar opposites, they are almost the same.
Rather, in the sense that during a Resolution-refutation proof, every time a new Resolution step is taken, the number of possible subsequent Resolution steps either gets smaller or stays the same (i.e. "decreases monotonically"). That's how we know for sure that if the proof is decidable there comes a point at which no more Resolution steps are left, and either the empty clause is all that remains, or some non-empty clause remains that cannot be reduced further by Resolution.
So basically Resolution gets rid of more and more irrelevant ...stuff as it goes. That's what I mean that it's "the polar opposite of brute force". Because it's actually pretty smart and it avoids doing the dumb thing of having to process all the things all the time before it can reach a conclusion.
Note that this is the case for Resolution, in the general sense, not just SLD-Resolution, so it does not depend on any particular search strategy.
I believe SLD-Resolution specifically (which is the kind of Resolution used in Prolog) goes much faster, first because it's "[L]inear" (i.e. in any Resolution step one clause must be one of the resolvents of the last step) and second because it's restricted to [D]efinite clauses and, as a result, there is only one resolvent at each new step and it's a single Horn goal so the search (of the SLD-Tree) branches in constant time.
Refs:
J. Alan Robinson, "A computer-oriented logic based on the Resolution principle" [1965 paper that introduced Resolution]
https://dl.acm.org/doi/10.1145/321250.321253
Robert Kowalski, "Predicate Logic as a Programming Language"
https://www.researchgate.net/publication/221330242_Predicate... [1974 paper that introduced SLD-Resolution]
I don't want to keep editing the above comment, so I'm starting a new one.
I really recommend that anyone with an interest in CS and AI read at least J. Alan Robinson's paper above. For me it really blew my mind when I finally found the courage to do it (it's old and a bit hard to read). I think there's a trope in wushu where someone finds an ancient scroll that teaches them a long-lost kung-fu and they become enlightened? That's how I felt when I read that paper, like I gained a few levels in one go.
Resolution is a unique gem of symbolic AI, one of its major achievements and a workhorse: used not only in Prolog but also in one of the two dominant branches of SAT-solving (i.e. the one that leads from Davis-Putnam to Conflict Driven Clause Learning) and even in machine learning, in one of the two main branches of Inductive Logic Programming (which I study), which is based on trying to perform induction by inverting deduction and so by inverting Resolution. There's really an ocean of knowledge that flows never-ending from Resolution. It's the bee's knees and the aardvark's nightgown.
I sincerely believe that the reason so many CS students seem to be positively traumatised by their contact with Prolog is that the vast majority of courses treat Prolog as any other programming language and jump straight to the peculiarities of the syntax and how to code with it, and completely fail to explain Resolution theorem proving. But that's the whole point of the language! What they get instead is some lyrical waxing about the "declarative paradigm", which makes no sense unless you understand why it's even possible to let the computer handle the control flow of your program while you only have to sort out the logic. Which is to say: because FOL is a computational paradigm, not just an academic exercise. No wonder so many students come off those courses thinking Prolog is just some stupid academic faffing about, and that it's doing things differently just to be different (not a strawman- actual criticism that I've heard).
In this day and age where confusion reigns about what even it means to "reason", it's a shame that the answer, that is to be found right there, under our noses, is neglected or ignored because of a failure to teach it right.
Excellent and Informative comments !
The way to learn a language is not via its syntax but by understanding the computation model and the abstract machine it is based on. For imperative languages this is rather simple, so we can jump right in and muddle our way to some sort of understanding. With functional languages it is much harder (you need to know the logic of functions), and it is quite impossible with logic languages (you need to know predicate logic). Thus we need to first focus on the underlying mathematical concepts for these categories of languages.
The Robert Kowalski paper Predicate Logic as a Programming Language you list above is the Rosetta stone of logic languages and an absolute must-read for everybody. It builds everything up from the foundations using implication (in disjunctive form), clause, clausal sentence, semantics, Horn clauses and computation (i.e. resolution derivation); all absolutely essential to understanding! This is the "enlightenment scroll" of Prolog.
I don't understand (the point of) your example. In all branches of the search `X > 5` will never be `true`, so yeah, `slow_computation` will not be reached. How does that relate to your point of it being "brute force"?
>> but if it did, it would come up with the same result
Meaning either changing the condition or the order of the clauses. How do you expect Prolog to proceed to `slow_computation` when you have declared a statement (X > 5) that is always false before it?
The point is to compare a) evaluate all three lines (member, >5, slow_computation) then fail because the >5 test failed; against b) evaluate (member, >5) then fail. And to ask whether that's the mechanism YeGoblynQueyne is referring to. If so, is it valid to describe b as "the polar opposite" of a? They don't feel like opposites, merely an implementation detail performance hack. We can imagine some completely different strategy such as "I know from some other Constraint Logic propagation that slow_computation has no solutions so I don't even need to go as far as the X>5 test" which is "clever" not "brute".
> "How do you expect Prolog to proceed to `slow_computation` when you have declared a statement (X > 5) that is always false before it"
I know it doesn't, but there's no reason why it can't. In a C-like language it's common to rely on short-circuit Boolean evaluation: if the first operand of an AND fails, the second is not tested. But if the language/implementation doesn't have that short-circuit optimisation, both tests are run and the outcome doesn't change. The short-circuit eval isn't the opposite of the full eval. And yes, this is nitpicking the term "polar opposite of", but that's the relevant bit about whether something is clever or brute: if you go into every door, that's brute. If you try every door and some are locked, that's still brute. If you see some doors have snow up to them and you skip the ones with no footprints, that's completely different.

Prolog was introduced to capture natural language, in a logic/symbolic way that didn't prove as powerful as today's LLMs for sure, but this still means there is a large corpus of direct English-to-Prolog mappings available for training, and the mapping rules are also much more straightforward by design. You can pretty much translate simple sentences 1:1 into Prolog clauses, as in the classic boring example.
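Presumably something along these lines (a reconstructed illustration, not the commenter's original snippet):

    % "Socrates is a man; every man is mortal."
    man(socrates).
    mortal(X) :- man(X).

    % ?- mortal(socrates).   % true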
This is being taken advantage of in Prolog code generation using LLMs. In the Quantum Prolog example, the LLM is also instructed not to generate search strategies/algorithms but just the planning domain representation and the action clauses for changing those domain state clauses, which is natural enough in vanilla Prolog.

The results are quite a bit more powerful, closer to end-user problems, and further up the food chain compared to the usual LLM coding tasks for Python and JavaScript, such as boilerplate code generation and similarly idiosyncratic problems.
"large corpus" - large compared to the amount of Python on Github or the amount of JavaScript on all the webpages Google has ever indexed? Quantum Prolog doesn't have any relevant looking DuckDuckGo results, I found it in an old comment of yours here[1] but the link goes to a redirect which is blocked by uBlock rules and on to several more redirects beyond which I didn't get to a page. In your linked comment you write:
> "has convenient built-in recursive-decent parsing with backtracking built-in into the language semantics, but also has bottom-up parsing facilities for defining operator precedence parsers. That's why it's very convenient for building DSLs"
which I agree with, for humans. What I am arguing is that LLMs don't have the same notion of "convenient". Them dumping hundreds of lines of convoluted 'unreadable' Python (or C or Go or anything) to implement "half of Common Lisp" or "half of a Prolog engine" for a single task is fine, they don't have to read it, and it gets the same result. What would be different is if it got a significantly better result, which I would find interesting but haven't seen a good reason why it would.
[1] https://news.ycombinator.com/item?id=40523633
This sparked a really fascinating discussion, I don't know if anyone will see this but thanks everyone for sharing your thoughts :)
I understand your point - to an LLM there's no meaningful difference between one Turing-complete language and another. I'll concede that I don't have a counter-argument, and perhaps it doesn't need to be Prolog - though my hunch is that LLMs tend to give better results when using purpose-built tools for a given type of problem.
The only loose end I want to address is the idea of "doing reasoning."
This isn't an AGI proposal (I was careful to say "good at writing prolog") just an augmentation that (as a user) I haven't yet seen applied in practice. But neither have I seen it convincingly dismissed.
The idea is the LLM would act like an NLP parser that gradually populates a prolog ontology, like building a logic jail one brick at a time.
The result would be a living breathing knowledge base which constrains and informs the LLM's outputs.
The punchline is that I don't even know any prolog myself, I just think it's a neat idea.
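A purely hypothetical sketch of that idea (every predicate name is invented for illustration): the LLM, or a tool wrapping it, asserts facts as it parses the conversation, and later outputs get checked against the accumulated knowledge base.

    :- dynamic parent/2.

    % the tool asserts facts extracted by the LLM
    add_fact(Fact) :- assertz(Fact).

    % rules the knowledge base can enforce
    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

    % e.g. after add_fact(parent(alice, bob)) and add_fact(parent(bob, carol)):
    % ?- grandparent(alice, carol).   % true, and available to constrain the next answer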
Its a Horn clause resolver...that's exactly the kind of reasoning that LLMs are bad at. I have no idea how to graft Prolog to an LLM but if you can graft any programming language to it, you can graft Prolog more easily.
Also, that you push Python and JavaScript makes me think you don't know many languages. Those are terrible languages to try to graft to anything. Just because you only know those 2 languages doesn't make them good choices for something like this. Learn a real language Physicist.
> Also, that you push Python and JavaScript
I didn't push them.
> Those are terrible languages to try to graft to anything.
Web browsers, Blender, LibreOffice and Excel all use those languages for embedded scripting. They're fine.
> Just because you only know those 2 languages doesn't make them good choices for something like this.
You misunderstood my claim and are refuting something different. I said there is more training data for LLMs to use to generate Python and JavaScript, than Prolog.
I'm not. Python and JS are scripting languages. And in this case, we want something that models formal logic. We are hammering in a nail, you picked up a screwdriver and I am telling you to use a claw hammer.
What does this comment even mean? A claw hammer? By formal definitions, all 3 languages are Turing complete and can express programs of the same computational complexity.
> By formal definitions, all 3 languages are Turing complete and can express programs of the same computational complexity.
So is Brainfuck.
Turing equivalence does not imply that languages are equally useful choices for any particular application.
But we kinda don't use python for a database query over sql do we?
I use an ORM every day.
> I have no idea how to graft Prolog to an LLM
Wrapping either the SWI-Prolog MQI, or even simpler, an existing Python interface like janus_swi, in a simple MCP is probably an easy weekend project. Tuning the prompting to get an LLM to reliably and effectively choose to use it when it would benefit from symbolic reasoning may be harder, though.
No call for talking down at people. No one has ever been convinced by being belittled.
We would begin by having a Prolog server of some kind (I have no idea if Prolog is parallelized but it should very well be if we're dealing with Horn Clauses).
There would be MCP bindings to said server, which would be accessible upon request. The LLM would provide a message, it could even formulate Prolog statements per a structured prompt, and then await the result, and then continue.
> Its a Horn clause resolver...that's exactly the kind of reasoning that LLMs are bad at. I have no idea how to graft Prolog to an LLM but if you can graft any programming language to it, you can graft Prolog more easily.
By grafting LLM into Prolog and not other way around ?
Of course it does "reasoning", what do you think reasoning is? From a quick google: "the action of thinking about something in a logical, sensible way". Prolog searches through a space of logical propositions (constraints) and finds conditions that lead to solutions (if any exist).
(a) Try adding another 100 or 1,000 interlocking propositions to your problem. It will find solutions or tell you none exists. (b) You can verify the solutions yourself. You don't get that with imperative descriptions of problems. (c) Good luck sandboxing Python or JavaScript with the threat of prompt injection still unsolved.
Of course it doesn't "do reasoning", why do you think "following the instructions you gave it in the stupidest way imaginable" is 'obviously' reasoning? I think one definition of reasoning is being able to come up with any better-than-brute-force thing that you haven't been explicitly told to use on this problem.
Prolog isn't "thinking". Not about anything, not about your problem, your code, its implementation, or any background knowledge. Prolog cannot reason that your problem is isomorphic to another problem with a known solution. It cannot come up with an expression transform that hasn't been hard-coded into the interpreter which would reduce the amount of work involved in getting to a solution. It cannot look at your code, reason about it, and make a logical leap over some of the code without executing it (in a way that hasn't been hard-coded into it by the programmer/implementer). It cannot reason that your problem would be better solved with SLG resolution (tabling) instead of SLD resolution (depth first search). The point of my example being pseudo-Python was to make it clear that plain Prolog (meaning no constraint solver, no metaprogramming), is not reasoning. It's no more reasoning than that Python loop is reasoning.
If you ask me to find the largest Prime number between 1 and 1000, I might think to skip even numbers, I might think to search down from 1000 instead of up from 1. I might not come up with a good strategy but I will reason about the problem. Prolog will not. You code what it will do, and it will slavishly do what you coded. If you code counting 1-1000 it will do that. If you code Sieve of Eratosthenes it will do that instead.
The disagreement you have with the person you are relying to just boils down to a difference in the definition of "reasoning."
Its a Horn clause interpreter. Maybe lookup what that is before commenting on it. Clearly you don't have a good grasp of Computer Science concepts or math based upon your comments here. You also don't seem to understand the AI/ML definition of reasoning (which is based in formal logic, much like Prolog itself).
Python and Prolog are based upon completely different kinds of math. The only thing they share is that they are both Turing complete. But being Turing complete isn't a strong or complete mathematical definition of a programming language. This is especially true for Prolog which is very different from other languages, especially Python. You shouldn't even think of Prolog as a programming language, think of it as a type of logic system (or solver).
None of that is relevant.
Contrary to what everyone else is saying, I think you're completely correct. Using it for AI or "reasoning" is a hopeless dead end, even if people wish otherwise. However I've found that Prolog is an excellent language for expressing certain types of problems in a very concise way, like parsers, compilers, and assemblers (and many more). The whole concept of using a predicate in different modes is actually very useful in a pragmatic way for a lot of problems.
When you add in the constraint solving extensions (CLP(Z) and CLP(B) and so on) it becomes even more powerful, since you can essentially mix vanilla Prolog code with solver tools.
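For instance, a minimal sketch of that mixing, assuming SWI-Prolog's library(clpfd) (other systems ship the same idea under names like CLP(Z)):

    :- use_module(library(clpfd)).

    % two digits whose sum is 10 and whose difference is 4
    digits(X, Y) :-
        [X, Y] ins 0..9,
        X + Y #= 10,
        X - Y #= 4.

    % ?- digits(X, Y), label([X, Y]).
    % X = 7, Y = 3.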
The reason why you can write parsers with Prolog is because you can cast the problem of determining whether a string belongs to a language or not as a proof, and, in Prolog, express it as a set of Definite Clauses, particularly with the syntactic sugar of Definite Clause Grammars that give you an executable grammar that acts as both acceptor and generator and is equivalent to a left-corner parser.
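For the flavour of it, a tiny DCG for the classic a^n b^n language; the same two rules act as both acceptor and generator:

    anbn --> [].
    anbn --> [a], anbn, [b].

    % ?- phrase(anbn, [a,a,b,b]).          % acceptor: true
    % ?- length(Ls, 4), phrase(anbn, Ls).  % generator: Ls = [a, a, b, b]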
Now, with that in mind, I'd like to understand how you and the OP reconcile the ability to carry out a formal proof with the inability to do reasoning. How is it not reasoning, if you're doing a proof? If a proof is not reasoning, then what is?
Clearly people write parsers in C and C++ and Pascal and OCAML, etc. What does it mean to come in with "the reason you can write parsers with Prolog..."? I'm not claiming that reason is incorrect, I'm handwaving it away as irrelevant and academic. Like saying that Lisp map() is better than Python map() because Lisp map is based on formal Lambda Calculus and Python map is an inferior imitation for blub programmers. When a programmer maps a function over a list and gets a result, it's a distinction without a difference. When a programmer writes a getchar() peek() and goto state machine parser with no formalism, it works, what difference does the formalism behind the implementation practically make?
Yes, maybe the Prolog way means concise code is easier for a human to check for being a correct expression of the intent, but an LLM won't look at it like that. Whatever the formalism brings, it hasn't been enough for every parser task of the last 50 years to be done in Prolog. Therefore it isn't of any particular interest or benefit, except academically.
> both acceptor and generator
Also academically interesting but practically useless due to the combinatorial explosion of "all possible valid strings" after the utterly basic "aaaaabbbbbbbbbbbb" examples.
> "how you and the OP reconcile the ability to carry out a formal proof with the inability to do reasoning. How is it not reasoning, if you're doing a proof? If a proof is not reasoning, then what is?"
If drawing a painting is art, is it art if a computer pulls up a picture of a painting and shows it on screen? No. If a human coded the proof into a computer, the human is reasoning, the computer isn't. If the computer comes up with the proof, the computer is reasoning. Otherwise you're in a situation where dominos falling over is "doing reasoning" because it can be expressed formally as a chain of connected events where the last one only falls if the whole chain is built properly, and that's absurdum.
> If a human coded the proof into a computer, the human is reasoning, the computer isn't. ... If the computer comes up with the proof, the computer is reasoning.
That is exactly what "formal logic programming" is all about. The machine is coming up with the proof for your query based on the facts/rules given by you. Therefore it is a form of reasoning.
Reasoning (cognitive thinking) is expressed as Arguments (verbal/written premises-to-conclusions) a subset of which are called Proofs (step-by-step valid arguments). Using Formalization techniques we have just pushed some of those proof derivations to a machine.
I pointed this out in my other comment here https://news.ycombinator.com/item?id=45911177 with some relevant links/papers/books.
See also Logical Formalizations of Commonsense Reasoning: A Survey (from the Journal of Artificial Intelligence Research) - https://jair.org/index.php/jair/article/view/11076
With Prolog, the proof is carried out by the computer, not a human. A human writes up a theory and a theorem and the computer proves the theorem with respect to the theory. So I ask again, how is carrying out a proof not reasoning?
>> I'm not claiming that reason is incorrect, I'm handwaving it away as irrelevant and academic.
That's not a great way to have a discussion.
The word "reason" came into this thread with the original comment:
I agree with you. In Prolog "?- 1=1." is reasoning by definition. Then 4. becomes "LLMs should emit Prolog because Prolog is good at executing Prolog code".

I think that's not a useful place to be, so I was trying to head off going there. But now I'll go with you - I agree it IS reasoning - can you please support your case that "executing Prolog code is reasoning" makes Prolog more useful for LLMs to emit than Python?
This is not my claim:
>> "executing Prolog code is reasoning" makes Prolog more useful for LLMs to emit than Python?
I said what I think about LLMs generating Prolog here:
https://news.ycombinator.com/item?id=45914587
But I was mainly asking why you say that Prolog's execution is "not reasoning". I don't understand what you mean that '"?- 1=1." is reasoning by definition' and how that ties-in with our discussion about Prolog reasoning or not.
"?- 1=1." is Prolog code. Executing Prolog code is reasoning. Therefore that is reasoning. Q.E.D. This is the point you refused to move on from until I agreed. So I agreed. So we could get back to the interesting topic.
A topic you had no interest in, only interest in dragging it onto a tangent and grinding it down to make ... what point, exactly? If "executing Prolog code" is reasoning, then what? I say it isn't useful to call it reasoning (in the context of this thread) because it's too broad to be a helpful definition, basically everything is reasoning, and almost nothing is not. When I tried to say in advance that this wouldn't be a useful direction and I didn't want to go here, you said it was "not a great way to have a discussion". And now having dragged me off onto this academic tangent, you dismiss it as "I wasn't interested in that other topic anyway". Annoying.
I'm sorry you find my contribution to the discussion annoying, but how should I feel if you just "agree" with me as a way to get me to stop arguing?
But I think your annoyance may be caused by misunderstanding my argument. For example:
>> If "executing Prolog code" is reasoning, then what? I say it isn't useful to call it reasoning (in the context of this thread) because it's too broad to be a helpful definition, basically everything is reasoning, and almost nothing is not.
Everything is not reasoning, nor is executing any code reasoning, but "executing Prolog code" is, because executing Prolog code is a special case of executing code. The reason for that is that Prolog's interpreter is an automated theorem prover, therefore executing Prolog code is carrying out a proof; in an entirely literal and practical sense, and not in any theoretical or abstract sense. And it is very hard to see how carrying out a proof automatically is "not reasoning".
I made this point in my first comment under yours, here:
https://news.ycombinator.com/item?id=45909159
The same clearly does not apply to Python, because its interpreter is not an automated theorem prover; it doesn't apply to javascript because its interpreter is not an automated theorem prover; it doesn't apply to C because its compiler is not an automated theorem prover; and so on, and so forth. Executing code in any of those languages is not reasoning, except in the most abstract and, well, academic, sense, e.g. in the context of the Curry-Howard correspondence. But not in the practical, down-to-brass-tacks way it is in Prolog. Calling what Prolog does reasoning is not a definition of reasoning that's too broad to be useful, as you say. On the contrary, it's a very precise definition of reasoning that applies to Prolog but not to most other programming languages.
I think you misunderstand this argument and as a consequence fail to engage with it and then dismiss it as irrelevant because you misunderstand it. I think you should really try to understand it, because it's obvious you have some strong views on Prolog which are not correct, and you might have the chance to correct them.
I absolutely have an interest in any claim that generating Prolog code with LLMs will fix LLMs' inability to reason. Prolog is a major part of my programming work and research.
> "?- 1=1." is Prolog code. Executing Prolog code is reasoning. Therefore that is reasoning. Q.E.D.
This is the dumbest thing i have read yet on HN. You are absolutely clueless about this topic and are merely arguing for argument's sake.
> If "executing Prolog code" is reasoning, then what? I say it isn't useful to call it reasoning (in the context of this thread) because it's too broad to be a helpful definition, basically everything is reasoning, and almost nothing is not.
What does this even mean? It has already been pointed out that Prolog does a specific type of formalized reasoning which is well understood. The fact that there are other formalized models to tackle subdomains of "Commonsense Reasoning" does not detract from the above. That is why folks are trying to marry Prolog (predicate logic) to LLMs (mainly statistical approaches) to get the best of both worlds.
User "YeGoblynQueenne" was being polite in his comments but for some reason you willfully don't want to understand and have come up with ridiculous examples and comments which only reflect badly on you.
You call it the dumbest thing you have ever read, and say that I know nothing - but you agree that it is a correct statement ("Prolog does a specific type of formalized reasoning").
> "What does this even mean?"
For someone who is so eager to call comments dumb, you sure have a lot of not-understanding going on.
1. Someone said "Prolog is good at reasoning problems"
2. I said it isn't any better than other languages.
3. Prolog people jumped on me because Ackchually Technickally everything Prolog does is 'reasoning' hah gotcha!
4. I say that is entirely unrelated to the 'reasoning' in "Prolog is good at reasoning problems". I demonstrate this by reductio ad absurdum - if executing "?- 1=1." is "reasoning" then it's absurd for the person to be saying that definition is a compelling reason to use Prolog, therefore they were not saying that, therefore this whole tangent about whether some formalism is or isn't reasoning by some academic definition is irrelevant to the claim and counter claim.
> "are merely arguing for argument's sake."
Presumably you are arguing for some superior purpose?
The easiest way for you to change my mind is to demonstrate literally anything that is better for an LLM to emit in Prolog than Python - given the condition that LLMs don't have to care about conciseness or expressivity or readability in the same way humans do. For one example, I say it would be no better for an LLM to solve an Einstein Puzzle one way or the other. The fact that you can't or won't do this, and prefer insults, is not changing my mind nor is it educating me in anything.
You edited your comment without any indication tags which is dishonest. However, my previous response at https://news.ycombinator.com/item?id=45939440 is still valid. This is an addendum to that;
> The easiest way for you to change my mind is to demonstrate literally anything that is better for an LLM to emit in Prolog than Python
I have no interest in trying to change your mind since you simply do not have the first idea about what Prolog is doing vis-a-vis any other non-logic programming language. You have to have some basic knowledge before we can have a meaningful discussion.
However, in my previous comment here https://news.ycombinator.com/item?id=45712934 i link to some usecases from others. In particular; the casestudy from user "bytebach" is noteworthy and explains exactly what you are asking for.
> The fact that you can't or won't do this, and prefer insults, is not changing my mind nor is it educating me in anything.
This is your dishonest edit without notification. I refuse to suffer wilful stupidity and hence retorted in a pointed manner; that was the only way left to get the message across. We had given you enough data/pointers in our detailed comments none of which you seem to have even grasped nor looked into. In a forum like this, if we are to learn from each other, both parties must put forth effort to understand the other side and articulate one's own position clearly. You have failed on both counts in this thread.
> but you agree that it is correct.
No, i did not; do not twist nor misrepresent my words. Your example had nothing whatsoever to do with "Reasoning" and hence i called it dumb.
> you sure have a lot of not-understanding going on.
Your and my comments are there for all to see. Your comments are evidence that you are absolutely clueless on Reasoning, Logic Programming Approaches and Prolog.
> 1. Someone said "Prolog is good at reasoning problems"
Which is True. But it is up to you to present the world-view to Prolog in the appropriate Formal manner.
> 2. I said it isn't any better than other languages.
Which is stupid. This single statement establishes the fact that you know nothing about Logic Programming nor the aspect of Predicate Logic it is based on.
> 3. Prolog people jumped on me because Ackchually Technickally everything Prolog does is 'reasoning' hah gotcha!
Which is True and not a "gotcha". You have no definite understanding of what the word "Reasoning" means in the context of Prolog. We have explained concepts and pointed you to papers none of which you are interested in studying nor understanding.
> 4. I say that is entirely unrelated to the 'reasoning' in "Prolog is good at reasoning problems". I demonstrate this by reductio ad absurdum - if executing "?- 1=1." is "reasoning" then it's absurd for the person to be saying that definition is a compelling reason to use Prolog, therefore they were not saying that, therefore this whole tangent about whether some formalism is or isn't reasoning by some academic definition is irrelevant to the claim and counter claim.
What does this even mean? This is just nonsense verbiage.
> Presumably you are arguing for some superior purpose?
Yes. I am testing my understanding of Predicate Logic/Logic Programming/Prolog against others. Also whether others have come up with better ways of application in this era of LLMs i.e. what are the different ways to use Prolog with LLMs today?.
I initially thought you were probably wanting a philosophical discussion of what "Reasoning" means and hence pointed to some relevant articles/papers but i am now convinced you have no clue about this entire subject and are really making up stuff as you go.
You are wasting everybody's time, testing their patience and coming across as totally ignorant on this domain.
Even in your example (which is obviously not a correct representation of Prolog), that code will work X orders of magnitude faster and with 100% reliability compared to the far inferior LLM reasoning capabilities.
This is not the point though
Algorithmically there's nothing wrong with using BFS/DFS to do reasoning as long as the logic is correct and the search space is constrained sufficiently. The hard part has always been doing the constraining, which LLMs seem to be rather good at.
> This is not the point though
Could you expand on what the point is? That the author's opinion, without much justification, is that this is not reasoning?
What makes you think your brain isn't also brute forcing potential solutions subconsciously and only surfacing the useful results?
Because I can solve problems that would take the age of the universe to brute force, without waiting the age of the universe. So can you: start counting at 1, increment the counter up to 10^8000, then print the counter value.
Prolog: 1, 2, 3, 4, 5 ...
You and me instantly: 10^8000
The brain can still use other means of working in addition to brute forcing solutions. For example, how would you go about solving the chess puzzle of eight queens that doesn't involve going through the potential positions and then filtering out the options that don't match the criteria for the solution?
Prolog can also evaluate mathematical expressions directly as well.
There's a whole lot of undecidable (or effectively undecidable) edge cases that can be adequately covered. As a matter of fact, Decidability Logic is compatible with Prolog.
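For what it's worth, the arithmetic evaluation mentioned above is just is/2 (a minimal sketch):

    ?- X is 2 + 3 * 4.
    X = 14.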
Can you try calculating 101 * 70 in your head?
I think therefore I am calculator?
I can absolutely try this. Doesn't mean i'll solve it. If i solve it there's no guarantee i'll be correct. Math gets way harder when i don't have a legitimate need to do it. This falls in the "no legit need" so my mind went right to "100 * 70, good enough."
Or you could do (100 + 1)*70 => 100*70 + 1*70 = 7000 + 70 = 7070
Very easy to solve, just like it is easy to solve many other ones once you know the tricks.
I recommend this book: https://www.amazon.com/Secrets-Mental-Math-Mathemagicians-Ca...
Completely missing the point on purpose?
Elaborate.
you don’t solve it by brute forcing possible solutions until one sticks
Yeah, read it in another comment. Why do you think doing calculations in your head is brute-forcing? Many people can do it flawlessly, without even knowing of these "tricks". They just know. Is that brute-force?
[flagged]
You are not replying to what I said. I am not going to repeat myself, see the parent comment you replied to.
Your original comment completely missed the point of what it was replying to, you phrased it like you were correcting them but you were actually in agreement and didn't seem to realize it. They tried to clarify when you asked and when you responded you assumed they had the opposite viewpoint from what they actually have.
I replied to "Can you try calculating 101 * 70 in your head?".
Yes, many people can successfully calculate it through learned tricks or no learned tricks.
That is all I am saying. And my question is: how is calculating it in your own head brute-force, especially if done without such tricks?
No need to complicate it, answers to my question would suffice.
> how is calculating it in your own head brute-force
Doing this calculation in your head is not brute force, which was their entire point. Their math question was an example of why the brain isn't brute-forcing solutions, replying to this:
> What makes you think your brain isn't also brute forcing potential solutions subconciously and only surfacing the useful results?
Um, that's really easy to do in your head, there's no carrying or anything? 7,070
7 * 101 = 707, then 707 * 10 = 7,070
And computers don't brute-force multiplication either, so I'm not sure how this is relevant to the comment above?
I think it is very relevant, because no brute-forcing is involved in this solution.
That's not true, the 'brute force' part is searching for a shortcut that works.
The brute force got reduced down to fast heuristics, like Arthur Benjamin's Mathemagics.
It’s almost like you’re proving the point of his reply…
human brains are insanely powerful pattern matching and shortcut-taking machines. There's very little brute forcing going on.
Your second sentence contradicts your first.
Pray tell how it contradicts the first.
Just note: human pattern matching is not Haskell/Erlang/ML pattern matching. It doesn't go [1] through all possible matches of every possible combination of all available criteria
[1] If it does, it's the most powerful computing device imaginable.
I 100% agree with nutjob :|
There are hundreds of trillions of synapses in the brain, and much of what they do (IANANS) could reasonably be described as pattern matching: mostly sitting idle waiting for patterns. (Since dendritic trees perform a lot of computation (for example, combining inputs at each branch), if you want to count the number of pattern matchers in the branch you can't just count neurons. A neuron can recognise more than one pattern.)
So yes, thanks to its insanely parallel architecture, the brain is also an insanely brute force pattern matcher, constantly matching against who knows how many trillions of previously seen patterns. (BTW IMHO this is why LLMs work so well)
(I do recognise the gap in my argument: are all those neurons actually receiving inputs to match against, or are they 'gated'? But we're really just arguing about semantics of applying "brute force", a CS term, to a neural architecture, where it has no definition.)
> [1] If it does, it's the most powerful computing device imaginable.
Well, my brain perhaps. Not sure about the rest of y'all.
Just intuition ;)
Everything you've written here is an invalid over-reduction, I presume because you aren't terribly well versed with Prolog. Your simplification is not only outright erroneous in a few places, but essentially excludes every single facet of Prolog that makes it a turing complete logic language. What you are essentially presenting Prolog as would be like presenting C as a language where all you can do is perform operations on constants, not even being able to define functions or preprocessor macros. To assert that's what C is would be completely and obviously ludicrous, but not so many people are familiar enough with Prolog or its underlying formalisms to call you out on this.
Firstly, we must set one thing straight: Prolog definitionally does reasoning. Formal reasoning. This isn't debatable, it's a simple fact. It implements resolution (a computationally friendly inference rule over computationally-friendly logical clauses) that's sound and refutation complete, and made practical through unification. Your example is not even remotely close to how Prolog actually works, and excludes much of the extra-logical aspects that Prolog implements. Stripping it of any of this effectively changes the language beyond recognition.
> Plain Prolog's way of solving reasoning problems is effectively:
No. There is no cognate to what you wrote anywhere in how Prolog works. What you have here doesn't even qualify as a forward chaining system, though that's what it's closest to given it's somewhat how top-down systems work with their ruleset. For it to even approach a weaker forward chaining system like CLIPS, that would have to be a list of rules which require arbitrary computation and may mutate the list of rules it's operating on. A simple iteration over a list testing for conditions doesn't even remotely cut it, and again that's still not Prolog even if we switch to a top-down approach by enabling tabling.
> You hard code some options
A Prolog knowledgebase is not hardcoded.
> write a logical condition with placeholders
A horn clause is not a "logical condition", and those "placeholders" are just normal variables.
> and Prolog brute-forces every option in every placeholder.
Absolutely not. It traverses a graph proving things, and when it cannot prove something it backtracks and tries a different route, or otherwise fails. This is of course without getting into impure Prolog, or the extra-logical aspects it implements. It's a fundamentally different foundation of computation which is entirely geared towards formal reasoning.
> And that might have been nice compared to Pascal in 1975, it's not so different to modern garbage collected high level scripting languages.
It is extremely different, and the only reason you believe this is because you don't understand Prolog in the slightest, as indicated by the unsoundness of essentially everything you wrote. Prolog is as different from something like Javascript as a neural network with memory is.
The original suggestion was that LLMs should emit Prolog code to test their ideas. My reply was that there is nothing magic in Prolog which would help them over any other language, but there is something in other languages which would help them over Prolog - namely more training data. My example was to illustrate that, not to say Prolog literally is Python. Of course it's simplified to the point of being inaccurate, it's three lines, how could it not be.
> "A Prolog knowledgebase is not hardcoded."
No, it can be asserted and retracted, or consult a SQL database or something, but it's only going to search the knowledge the LLM told it to - in that sense there is no benefit to an LLM to emit Prolog over Python, since it could emit the facts/rules/test cases/test conditions in any format it likes; it doesn't have any attraction to concise, clean, clear, expressive output.
> "those "placeholders" are just normal variables"
Yes, just normal variables - and not something magical or special that Prolog has that other languages don't have.
> "Absolutely not. It traverses a graph proving things,"
Yes, though, it traverses the code tree by a depth-first walk. If the tree has no infinite left-recursion coded in it, that is a brute-force walk. It proves things by ordinary programmatic tests that exist in other languages - value equality, structure equality, membership, expression evaluation, expression comparison, user code execution - not by intuition, logical leaps, analogy, flashes of insight. That is, not particularly more useful than other languages which an LLM could emit.
> "Your example is not even remotely close to how Prolog actually works"
> "There is no cognate to what you wrote anywhere in how Prolog works"
> "It is extremely different"
Well:
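Something like this minimal sketch (parent_of/2 is just an illustrative name for the relation):

    ?- member(X, [martha, brian, sarah, tyrone]),
       parent_of(X, timmy).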
That's a loop over the people, filling in the variable X. Prolog is not looking at Ancestry.com to find who Timmy's parents are. It's not saying "ooh you have a SQLite database called family_tree I can look at". That it's doing it by a different computational foundation doesn't seem relevant when that's used to give it the same abilities.

My point is that Prolog is "just" a programming language, and not the magic that a lot of people feel like it is, and therefore is not going to add great new abilities to LLMs that haven't been discovered because of Prolog's obscurity. If adding code to an LLM would help, adding Python to it would help. If that's not true, that would be interesting - someone should make that case with details.
> "and the only reason you believe this is because you don't understand Prolog in the slightest"
This thread would be more interesting to everybody if you and hunterpayne would stop fantasizing about me, and instead explain why Prolog's fundamentally different foundation makes it a particularly good language for LLMs to emit to test their other output - given that they can emit virtually endless quantities of any language, custom writing any amount of task-specific code on the fly.
The discussion has become contentious and that's very unfortunate because there's clearly some confusion about Prolog and that's always a great opportunity to learn.
You say:
>> Yes, though, it traverses the code tree by depth first walk.
Here's what I suggest: try to think what, exactly, is the data structure searched by Depth First Search during Prolog's execution.
You'll find that this structure is what we call an SLD-tree. That's a tree where the root is a Horn goal that begins the proof (i.e. the thing we want to dis-prove, since we're doing a proof by refutation); every other node is a new goal derived during the proof; every branch is a Resolution step between one goal and one definite program clause from a Prolog program; and every leaf of a finite branch is either the empty clause, signalling the success of the proof by refutation, or a non-empty goal that cannot be further reduced, which signals the failure of the proof. So that's basically a proof tree and the search is ... a proof.
So Prolog is not just searching a list to find an element, say. It's searching a proof tree to find a proof. It just so happens that searching a proof tree to find a proof corresponds to the execution of a program. But while you can use a search to carry out a proof, not every search is a proof. You have to get your ducks in a row the right way around otherwise, yeah, all you have is a search. This is not magick, it's just ... computer science.
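To make that concrete, here's a hand-written sketch of one branch of an SLD-tree for a two-clause program (the program and the trace are illustrative, written out by hand, not tool output):

    parent(ann, bob).
    parent(bob, cal).
    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

    % Root goal:             ?- grandparent(ann, cal).
    % Resolve with the rule: ?- parent(ann, Y), parent(Y, cal).
    % Resolve with fact 1:   ?- parent(bob, cal).          (Y = bob)
    % Resolve with fact 2:   ?- .                           (the empty clause: the refutation succeeds)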
It should go without saying that you can do the same thing with Python, or with javascript, or with any other Turing-complete language, but then you'd basically have to re-invent Prolog, and implement it in that other language; an ad-hoc, informally specified, bug-ridden and slow implementation of half of Prolog, most like.
This is all without examining whether you can fix LLMs' lack of reasoning by funneling their output through a Prolog interpreter. I personally don't think that's a great idea. Let's see, what was that soundbite... "intelligence is shifting the test part of generate-test into the generate part" [1]. That's clearly not what pushing LLM output into a Prolog interpreter achieves. Clearly, if good, old-fashioned symbolic AI has to be combined with statistical language modelling, that has to happen much earlier in the statistical language modelling process. Not when it's already done and dusted and we have a language model; which is only statistical. Like putting the bubbles in the soda before you serve the drink, not after, the logic has to go into the language modelling before the modelling is done, not after. Otherwise there's no way I can see that the logic can control the modelling. Then all you have is generate-and-test, and it's meh as usual. Although note that much recent work on carrying out mathematical proofs with LLMs does exactly that, e.g. like DeepMind's AlphaProof. Generate-and-test works, it's just dumb and inefficient and you can only really make it work if you have the same resources as DeepMind and equivalent.
_____________
[1] Marvin Minsky via Rao Kambhampati and students: https://arxiv.org/html/2504.09762v1
This is a philosophical argument.
The way to look at this is first to pin down what we mean when we say Human Commonsense Reasoning (https://en.wikipedia.org/wiki/Commonsense_reasoning). Obviously this is quite nebulous and cannot be defined precisely but OG AI researchers have a done a lot to identify and formalize subsets of Human Reasoning so that it can be automated by languages/machines.
See the section Successes in automated commonsense reasoning in the above wikipedia page - https://en.wikipedia.org/wiki/Commonsense_reasoning#Successe...
Prolog implements a language that does logical inference only within a formalized subset of the human reasoning mentioned above. Now note that all our scientific advances have come from our ability to formalize and thus automate what was previously only heuristics. Thus if I were to move more real-world heuristics (which is what a lot of human reasoning consists of) into some formal model, then Prolog (or, say, LLMs) can be made to reason better about it.
See the paper Commonsense Reasoning in Prolog for some approaches - https://dl.acm.org/doi/10.1145/322917.322939
Note however the paper beautifully states at the end;
Prolog itself is all form and no content and contains no knowledge. All the tasks, such as choosing a vocabulary of symbols to represent concepts and formulating appropriate sentences to represent knowledge, are left to the users and are obviously domain-dependent. ... For each particular application, it will be necessary to provide some domain-dependent information to guide the program writing. This is true for any formal languages. Knowledge is power. Any formalism provides us with no help in identifying the right concepts and knowledge in the first place.
So Real-World Knowledge encoded into a formalism can be reasoned about by Prolog. LLMs claim to do the same on unstructured/non-formalized data, which is untenable. A machine cannot do "magic" but can only interpret formalized/structured data according to some rules. Note that the set of rules can be dynamically increased by ML, but ultimately they are just rules which interact with one another in unpredictable ways.

Now you can see where Prolog might be useful with LLMs. You can impose structure on the view of the World seen by the LLM, and also force it to confine itself only to the reasoning it can do within this world-view by asking it to do predominantly Prolog-like reasoning, but you don't turn the LLM into just a Prolog interpreter. We don't know how this interacts with the other heuristic/formal reasoning parts (e.g. reinforcement learning) of LLMs, but it does seem to give more predictable and more correct output. This can then be iterated upon to get a final acceptable result.
PS: You might find the book Thinking and Deciding by Jonathan Baron useful for background knowledge - https://www.cambridge.org/highereducation/books/thinking-and...
This is my own recent attempt at this:
https://news.ycombinator.com/item?id=45937480
The core idea of DeepClause is to use a custom Prolog-based DSL together with a metainterpreter implemented in Prolog that can keep track of execution state and implicitly manage conversational memory for an LLM. The DSL itself comes with special predicates that are interpreted by an LLM. "Vague" parts of the reasoning chain can thus be handed off to a (reasonably) advanced LLM.
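For anyone who hasn't seen one: a metainterpreter here means a Prolog program that runs Prolog. The textbook three-clause "vanilla" version looks roughly like this (DeepClause's actual DSL and LLM-backed predicates are more involved and not shown here):

    % Vanilla metainterpreter: executes a goal against the loaded program.
    % (Pure programs only; built-ins and control constructs need extra clauses.)
    solve(true).
    solve((A, B)) :- solve(A), solve(B).
    solve(Goal)   :- clause(Goal, Body), solve(Body).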
Would love to collect some feedback and interesting ideas for possible applications.
IIRC IBM’s Watson (the one that played Jeopardy) used primitive NLP (imagine!) to form a tree of factual relations and then passed this tree to construct Prolog queries that would produce an answer to a question. One could imagine that by swapping out the NLP part with an LLM, the model would have 1. a more thorough factual basis against which to write Prolog queries and 2. a better understanding of the queries it should write to get at answers (for instance, it may exploit more tenuous relations between facts than primitive NLP).
Not so "primitive" NLP. Watson started with what its team called a "shallow parse" of a sentence using a dependency grammar and then matched the parse to an ontology consisting of good, old fashioned frames [1]. That's not as "advanced" as an LLM but far more reliable.
I believe the ontology was indeed implemented in Prolog but I forget the architecture details.
______________
[1] https://en.wikipedia.org/wiki/Frame_(artificial_intelligence...
Please tell me that's approximately what Palantir Ontology is, because if it isn't, I've no idea what it could be.
https://www.palantir.com/docs/foundry/ontology/overview/
We've done this, and it works. Our setup is to have some agents that synthesize Prolog and other types of symbolic and/or probabilistic models. We then use these models to increase our confidence in LLM reasoning and iterate if there is some mismatch. Making synthesis work reliably on a massive set of queries is tricky, though.
Imagine a medical doctor or a lawyer. At the end of the day, their entire reasoning process can be abstracted into some probabilistic logic program which they synthesize on-the-fly using prior knowledge, access to their domain-specific literature, and observed case evidence.
There is a growing body of publications exploring various aspects of synthesis, e.g. references included in [1] are a good starting point.
[1] https://proceedings.neurips.cc/paper_files/paper/2024/file/8...
The next step is: can it solve the Wicked Problems?
https://en.wikipedia.org/wiki/Wicked_problem
I am once again shilling the idea that someone should find a way to glue Prolog and LLMs together for better reasoning agents.
There are definitely people researching ideas here. For my own part, I've been doing a lot of work with Jason[1], a very Prolog like logic language / agent environment with an eye towards how to integrate that with LLMs (and "other").
Nothing specific / exciting to share yet, but just thought I'd point out that there are people out there who see potential value in this sort of thing and are investigating it.
[1]: https://github.com/jason-lang/jason
Related: LLMs trained on "A is B" fail to learn "B is A"
https://arxiv.org/abs/2309.12288
You might find Eugene Asahara's detailed Prolog in the LLM Era series of about a dozen blog posts very useful - https://eugeneasahara.com/category/prolog-in-the-llm-era/
Prolog doesn't look like javascript or python so:
1. web devs are scared of it.
2. not enough training data?
I do remember having to wrestle to get prolog to do what I wanted but I haven't written any in ~10 years.
>>Prolog doesn't look like javascript or python so:
Think of it this way. In Python and JavaScript you write code, and to test if it's correct you write unit test cases.
A Prolog program is basically a bunch of test cases/unit test cases: you write them, and then tell the Prolog compiler, 'write code that passes these test cases'.
That is, you are writing the program specification, or tests that, if they pass, would represent a solution to the problem. The job of the compiler is to write the code that passes these test cases.
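A tiny illustration of that flavour - the textbook append relation (renamed app/3 here so it doesn't clash with the built-in library predicate). The same "specification" answers several different questions:

    app([], Ys, Ys).
    app([X|Xs], Ys, [X|Zs]) :- app(Xs, Ys, Zs).

    % ?- app([1,2], [3], Zs).       Zs = [1,2,3]
    % ?- app(Xs, Ys, [1,2,3]).      enumerates every way to split [1,2,3]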
It's been a while since I have done web dev, but web devs back then were certainly not scared of any language. Web devs are like the ultimate polyglots. Or at least they were. I was regularly bouncing around between a half dozen languages when I was doing pro web dev. It was web devs who popularized numerous different languages to begin with simply because delivering apps through a browser allowed us a wide variety of options.
No web dev I have ever met could use Prolog well. I think your statement about web devs being polyglots is based upon the fact that web devs chase every industry fad. I think that has a lot to do with the nature and economics of web dev work (I'm not blaming the web devs for this). I mean the best way to succeed as a webdev is to write your own version of a framework that does the same thing as the last 10 frameworks but with better buzzword marketing.
Generally speaking, all the languages they know are pretty similar to each other. Bolting on lambdas isn't the same as doing pure FP. Also, anytime a problem comes up where you would actually need a weird language based upon different math, those problems will be assigned to some other kind of developer (probably one with a really strong CS background).
That you haven’t met any webdevs using prolog probably is because 1) prolog is a very rare language among devs in general not just webdevs (unless you count people that did prolog in a course 20 years ago and remember nothing) 2) prolog just isn’t that focused on webdev (like saying ”not many embedded devs know react so I guess it is because react is too hard for them”)
However, it is easy to add Prolog to a web page:
http://tau-prolog.org/
I have the complete opposite view of web developers. :)
Maybe the ones these days are different. I left the field probably 15 years ago.
Maybe they were, but these days everything must be in JS syntax. Even if it is longer than pure CSS, they want the CSS inside JS syntax. They are only ultimate polyglot as long as all the languages are actually JS.
(Of course this is an overgeneralization, since obviously, there are web developers, who do still remember how to do things in HTML, CSS and, of course JS.)
As someone who did deep learning research 2017-2023, I agree. "Neurosymbolic AI" seems very obvious, but funding has just been getting tighter and more restrictive towards the direction of figuring out things that can be done with LLMs. It's like we collectively forgot that there's more than just txt2txt in the world.
YES! I've run a few experiments on classical logic problems and an LLM can spit out Prolog programs to solve the puzzle. Try it yourself: ask an LLM to write some Prolog to solve some problem and then copy-paste it to https://swish.swi-prolog.org/ and see if it runs.
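For a flavour of what that looks like, here's the kind of toy encoding you can paste into SWISH (hand-written for illustration, not actual LLM output; the puzzle and names are invented):

    % Ann, Bob and Cid each own exactly one of: a cat, a dog, a fish.
    % Ann is allergic to fur; Bob does not own the dog.
    pets(Ann, Bob, Cid) :-
        permutation([cat, dog, fish], [Ann, Bob, Cid]),
        Ann = fish,      % no fur for Ann
        Bob \= dog.      % Bob does not own the dog

    % ?- pets(Ann, Bob, Cid).
    % Ann = fish, Bob = cat, Cid = dog.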
I think prolog is the right format to codify expertise in Claude Skills. I just haven’t tested it yet.
Wouldn’t that be like a special case of neuro-symbolic programming?! There is plenty of research going on
> LLMs are bad at counting the number of r's in strawberry.
This is a tokenization issue, not an LLM issue.
Can't find the links right now, but there were some papers on llm generating prolog facts and queries to ground the reasoning part. Somebody else might have them around.
There's a lot of work in this area. See e.g., the LoRP paper by Di et al. There's also a decent amount of work on the other side too, i.e., using LLMs to convert Prolog reasoning chains back into natural language.
I think that's what these guys are doing
https://www.symbolica.ai/
There are people working on integration deep learning with symbolic AI (but I don't know more)
If you are looking for AGI and you understand what is going on inside of it, then it is obviously not AGI.
@goblinqueen, you around?
@YeGoblynQueenne Dunno if it will ping the person
It doesn't, but I found the thread anyway :)
I've been thinking a lot about this, and I want to build the following experiment, in case anyone is interested:
The experiment is about putting an LLM to play plman[0] with and without prolog help.
plman is a Pac-Man-like game for learning Prolog. It was written by professor Francisco J. Gallego from Alicante University to teach the logic course in computer science.
Basically you write a solution in Prolog for a map, and plman executes it step by step so you can visually see the pacman (plman) moving around the maze, eating and avoiding ghosts and other traps.
There is an interesting dynamic about finding keys for doors and timing based traps.
There are different levels of complexity, and you can also write easily your maps, since they are just ascii characters in a text file.
I thought this was the perfect project to visually explain to my coworkers the limits of LLM "reasoning" and what symbolic reasoning is.
So far I have hooked up the ChatGPT API to try to solve scenarios, and it fails even with a substantial amount of retries. That's what I was expecting.
The next thing would be to write a mcp tool so that the LLM can navigate the problem by using the tool, but here is where I need guidance.
I'm not sure about the best dynamic to prove the usefulness of prolog in a way that goes beyond what context retrieval or db query could do.
I'm not sure if the LLM should write the Prolog solution. I want to avoid building something trivial like the LLM asking for the steps, already solved, so my intuition is telling me that I need some sort of virtual joystick MCP to hide Prolog from the LLM, so the LLM could have access to the current state of the screen, and ask questions like: what would my position be if I move up? What's the position of the ghost in the next move? Where is the door relative to my current position?
I don't have academic background to design this experiment properly. Would be great if anyone is interested to work together on this, or give me some advice.
Prior work pending on my reading list:
- LoRP: LLM-based Logical Reasoning via Prolog [1]
- A Pipeline of Neural-Symbolic Integration to Enhance Spatial Reasoning in Large Language Models [2]
- [0] https://github.com/Matematicas1UA/plman/blob/master/README.m...
- [1] https://www.sciencedirect.com/science/article/abs/pii/S09507...
- [2] https://arxiv.org/html/2411.18564v1
yes
Prolog really is such a fantastic system, if I can justify its usage then I won't hesitate to do so. Most of the time I'll call a language that I find to be powerful a "power tool", but that doesn't apply here. Prolog is beyond a power tool. A one-off bit of experimental tech built by the greatest minds of a forgotten generation. You'd find it deep in the irradiated ruins of a dead city, buried far underground in a bunker easily missed. A supercomputer with the REPL's cursor flickering away in monochrome phosphor. It's sitting there, forgotten. Dutifully waiting for you to jack in.
When I entered university for my Bachelors, I was 28 years old and already worked for 5 or 6 years as a self-taught programmer in the industry. In the first semester, we had a Logic Programming class and it was solely taught in Prolog. At first, I was mega overwhelmed. It was so different than anything I did before and I had to unlearn a lot of things that I was used to in "regular" programming. At the end of the class, I was a convert! It also opened up my mind to functional programming and mathematical/logical thinking in general.
I still think that Prolog should be mandatory for every programmer. It opens up the mind in such a logical way... Love it.
Unfortunately, I never found an opportunity in my 11 years since then to use it in my professional practice. Or maybe I just missed the opportunities?????
Did they teach you how to use DCGs? A few months ago I used EDCGs as part of a de-spaghettification and bug fixing effort to trawl a really nasty 10k loc sepples compilation unit and generate tags for different parts of it. Think ending up with a couple thousand ground terms like:
tag(TypeOfTag, ParentFunction, Line).
Type of tag indicating things like an unnecessary function call, unidiomatic conditional, etc.
I then used the REPL to pull things apart, wrote some manual notes, and then consulted my complete knowledgebase to create an action plan. Pretty classical expert system stuff. Originally I was expecting the bug fixing effort to take a couple of months. 10 days of Prolog code + 2 days of Prolog interaction + 3 days of sepples weedwacking and adjusting what remained in the plugboard.
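To give a flavour of the "pulling things apart" step, queries over terms of that shape look roughly like this (the tag names below are invented stand-ins, not the real ones):

    tag(unnecessary_call,        parse_header, 212).
    tag(unidiomatic_conditional, parse_header, 230).
    tag(unnecessary_call,        emit_block,   871).

    % Functions with more than one finding:
    % ?- bagof(L, T^tag(T, F, L), Ls), length(Ls, N), N > 1.
    % F = parse_header, Ls = [212, 230], N = 2.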
This sounds interesting. Perhaps you could write a blog post about it? I'm always looking for use cases for Prolog
Prolog is a great language to learn. But I wouldn't want to use it for anything more than what it's directly good at. Especially the cut operator, that's pretty mind bending. But once you get good at it, it all just flows. But I doubt more than 1% of devs could ever master it, even on an unlimited timeline. It's just much harder than any other type of non-research dev work.
In university, Learning prolog was my first encounter with the idea that my IQ may not be as high as I thought
I also found it mindbending.
But some parts, e.g. the cut operator, are something I've copied several times over for various things. A couple of prototype parser generators, for example - allowing backtracking, but using a cut to indicate when backtracking is an error can be quite helpful.
"Keep your exclamation points under control. You are allowed no more than two or three per 100,000 words of prose."
Elmore Leonard, on writing. But he might as well have been talking about the cut operator.
At uni I had assignments where we were simply not allowed to use it.
That may make sense for Prolog code - I don't know Prolog enough to say. But the places I like to use it, it significantly simplified code by letting me write grammars with more local and specific error reporting.
That is, instead of continuing to backtrack, I'd use a cut-like operator to say "if you backtrack past this, then the error is here, and btw. (optionally) here is a nicer error message".
This could of course alter semantics. E.g. if I had a rule "expr ::= (foo ! bar) | (foo baz)", foo baz would never get satisfied, whereas with "expr ::= (foo bar) | (foo baz)" it could. (And in that example, it'd be totally inappropriate in my parser generator too)
I'm guessing the potential to have non-local effects on the semantics is why you'd consider it problematic in Prolog? I can see it would be problematic if the cut is hidden away from where it would affect you.
In my use, the grammar files would typically be a couple of hundred lines at most, and the grammar itself well understood, and it was used explicitly to throw an error, so you'd instantly know.
There are (at least) two ways of improving on that, which I didn't bother with: I could use it to say "push the error message and location" and pop those errors if a given subtree of the parse was optional. Or I could validate that these operators don't occur in rules that are used in certain ways.
But in practice in this use I never ended up with big enough code that it seemed worth it, and would happily litter the grammars with lots of them.
I used to use a cut operator about every 2 to 4 rules. If you are constantly using it as error handling, I would agree you are using it too often. If you are using it to turn sets into scalars or cells, then you are using it correctly. It just makes the code really hard to reason about and maintain.
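A minimal sketch of that "set to scalar" use - committing to the first solution instead of a set of them:

    first_member(X, [X|_]) :- !.
    first_member(X, [_|Xs]) :- first_member(X, Xs).

    % ?- first_member(X, [a, b, c]).
    % X = a.        (the cut stops backtracking into b and c)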
I thoroughly enjoyed doing all the exercises. It was challenging and hence, fun!
I don't think I ever learned how it can be useful other than feeding the mind.
There was a time when the thinking was you could load all the facts into a Prolog engine and it would replace experts like doctors and engineers - expert systems. It didn't work. Now it's a curiosity.
Intro to quantum physics for me (which is only a sophomore course). I noped out of advanced math/physics at that point; luckily I did learn to code on my own.
I had more success with the Prolog language track on https://exercism.org/tracks/prolog
It's a mind-bending language and if you want to experience the feeling of learning programming from the beginning again this would be it
My prolog anecdote: ~2001 my brother and I writing an A* pathfinder in prolog to navigate a bot around the world of Asheron's Call (still the greatest MMORPG of all time!). A formative experience in what can be done with code. Others had written a plugin system (called Decal) in C for the game and a parser library for the game's terrain file format. We took that data and used prolog to write an A* pathfinder that could navigate the world, avoiding un-walkable terrain and even using the portals to shortcut between locations. Good times.
Two Prolog books that I find very interesting:
Advanced Turbo prolog - https://archive.org/details/advancedturbopro0000schi/mode/2u...
Prolog programming for artificial intelligence - https://archive.org/details/prologprogrammin0000brat_l1m9/mo...
There seems to be an interesting difference between Prolog and conventional (predicate) logic.
In Prolog, anything that can't be inferred from the knowledge base is false. If nothing about "playsAirGuitar(mia)" is implied by the knowledge base, it's false. All the facts are assumed to be given; therefore, if something isn't given, it must be false.
Predicate logic is the opposite: If I can't infer anything about "playsAirGuitar(mia)" from my axioms, it might be true or false. Its truth value is unknown. It's true in some models of the axioms, and false in others. The statement is independent of the axioms.
Deductive logic assumes an open universe, Prolog a closed universe.
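A concrete sketch of the difference, reusing the playsAirGuitar example:

    playsAirGuitar(jody).

    % ?- playsAirGuitar(jody).      true:  provable from the knowledge base
    % ?- playsAirGuitar(mia).       false: nothing about mia is given, so the query just fails
    % ?- \+ playsAirGuitar(mia).    true:  negation as failure, i.e. "not provable"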
This is called the Closed World Assumption (CWA) - https://en.wikipedia.org/wiki/Closed-world_assumption
Prolog’s Closed-World Assumption: A Journey Through Time - https://medium.com/@kenichisasagawa/prologs-closed-world-ass...
Is Prolog really based on the closed-world assumption? - https://stackoverflow.com/questions/65014705/is-prolog-reall...
It's not really false I think. It's 'no', which is an answer to a question "Do I know this to be true?"
I think there should be room for three values there: true, unprovable, false, where false things are also unprovable. I wonder if Prolog has false, defined as "yes" for the opposite.
> It's not really false I think. It's 'no', which is an answer to a question "Do I know this to be true?"
I don't think so, because in this case both x and not-x could be "no", but I think in Prolog, if x is "no", not-x is "yes", even if neither is known to be true. It's not a three-valued logic that doesn't adhere to the law of the excluded middle.
If x is "no" (I do not know this to be true) then not-x is "yes" (I do know this to be true). So negation still works as usual.
"Yes" is not "true" but rather "provably true". And "no" is not "false" but rather "not provably true".
Third sensible value in this framework (which I think Prolog doesn't have) would be "false" meaning "it's provably false" ("the opposite of it is provably true").
To be frank I think Prolog in newer implementations completely abandoned this nuance and just call states "true" and "false" instead of "yes" and "no".
> If x is "no" (I do not know this to be true) then not-x is "yes" (I do know this to be true). So negation still works as usual.
As I said though, that doesn't make sense. Because if I don't know x to be true because it is not mentioned in the knowledge base, I also don't know not-x to be true. So both would have to be "no". But they aren't. Therefore the knowledge interpretation is incorrect. Knowledge wouldn't be closed under negation. If you don't know something to be true, that doesn't imply that you know it to be false.
You are right. If X is 'no' then not-X wouldn't necessarily be "yes".
After looking around I see that Prolog recognizes some nuance around not: https://en.wikipedia.org/wiki/Prolog#Negation
And already deprecated one 'not' operator:
https://www.swi-prolog.org/pldoc/man?predicate=not/1
Look at how they are not using not, but rather "not provable".
I'm not sure if Prolog has straight-up negation that behaves in a classical, two-valued way.
I recently implemented an eagerly evaluated embedded Prolog dialect in Dart for my game applications. I used SWI documentation extensively to figure out what to implement.
But I think I had the most difficulty designing the interface between the logic code and Dart. I ended up with a way to add "Dart-defined relations", where you provide relations backed dynamically by your ECS or database. State stays in imperative land, rules stay in logic land.
Testing on Queens8, SWI is about 10,000 times faster than my implementation. It's a work of art! But it doesn't have the ease of use in my game dev context as a simple Dart library does.
Would you mind sharing your prolog dart lib?
Unfortunately it is closed source. My plan is to use it as (part of) the foundation for my game studio.
Do you have a different use case? I would be open to sharing it on a project- or time-limited basis in exchange for bug reports and feature requests.
I only read the first 88 pages of Prolog Programming in Depth but I found it to be the best introductory book for programming in Prolog because it presents down to earth examples of coding like e.g. reading a file, storing data. Most other books are mainly or only focused on the pure logic stuff of Prolog but when you program you need more.
Another way of getting stuff done would be to use another programming language with its standard library (with regex, networking, json, ...) and embed or call Prolog code for the pure logic stuff.
I've recently started modeling some of my domains/potential code designs in Prolog. I'm not that advanced. I don't really know Prolog that well. But even just using a couple basic prolog patterns to implement a working spec in the 'prolog way' is *unbelievably* useful for shipping really clean code designs to replace hoary old chestnut code. (prolog -> ruby)
I keep wishing for "regex for prolog", ie: being able to (in an arbitrary language) express some functional bits in "prolog-ish", and then be able to ask/query against it.
(not exactly that, but hopefully you get the gist)

There's so much stuff regarding constraints, access control, and relationship queries that could be expressed "simply" in Prolog, and being able to extract out those interior bits for further use in your more traditional programming language would be really helpful! (...at least in my imagination ;-)
You can do that in Racket, with the Racklog library¹. There's also Datalog² and MiniKanren and probably some other logic languages available.
[1] https://docs.racket-lang.org/racklog/index.html
[2] https://docs.racket-lang.org/datalog/index.html
While usually using native syntax rather than strings, something like that exists for most languages of any popularity (and many obscure ones), in the form of miniKanren implementations.
https://minikanren.org/
If you really want something that takes Prolog strings instead (and want the full power of prolog), then there are bindings to prolog interpreters from many languages, and also SWI-Prolog specifically provides a fairly straightforward JSON-based server mode "Machine Query Interface" that should be fairly simple to interface with any language.
https://www.swi-prolog.org/pldoc/man?section=mqi-overview
There are a bunch of libraries that will do this - here's one example of a python one: https://github.com/yuce/pyswip - and a ruby one: https://github.com/preston/ruby-prolog
Thanks for the reference! `pyswip` is the closest I've seen so far:
...will definitely keep it in my back pocket!

pyswip is a one-way Python-to-SWI-Prolog interface; there's also a first-party (maintained as part of SWI-Prolog), two-way one called janus-swi.
https://pypi.org/project/janus-swi/
https://www.swi-prolog.org/pldoc/man?section=janus-call-prol...
You might be interested in Flix:
https://play.flix.dev/?q=PQgECUFMBcFcCcB2BnUBDUBjA9gG15JtAJb...
It is an embedded Datalog database and query language in a general-purpose programming language.
More examples on https://flix.dev/
You could try picat
[0] https://picat-lang.org/
What you mean is not "regex for Prolog" but an embedded PROLOG interpreter, which exists.
Ironically, the most common way I have seen people do this is use an embedded LISP interpreter, in which a small PROLOG is easily implemented.
https://www.metalevel.at/lisprolog/ suggests Lisprolog (Here are some embedded LISPs: ECL, PicoLisp, tulisp)
SWI-Prolog can also be linked against C/C++ code: https://stackoverflow.com/questions/65118493/is-there-any-re... https://sourceforge.net/p/gprolog/code/ci/457f7b447c2b9e90a0...
Racklog is an embedded PROLOG for Racket (Scheme): https://docs.racket-lang.org/racklog/
I've wished for the same kind of 'embed prolog in my ruby' for enumerating all possible cases, all invalid cases, etc in test suites. Interesting to know it's not just me!
Maybe try a Ruby Kanren implementation:
https://minikanren.org/
uKanren is conceptually small and simple, here's a Ruby implementation: https://github.com/jsl/ruby_ukanren
There are a bunch of libraries that will do this - here's one example of a python one: https://github.com/yuce/pyswip - and a ruby one: https://github.com/preston/ruby-prolog
I did try ruby-prolog. The deeper issue is that it's just not Prolog. Writing in actual Prolog affords a lot of clarity and concision which would be quite noisy in ruby-prolog. To me, the difference was stark enough that the convenience of already being in Ruby wasn't worth it.
Porolog might be more to your liking.
https://www.rubydoc.info/gems/porolog#porolog-wiki
I wonder if there are examples of whole product architectures done in Prolog; it seems like an elegant solution if done right. I've been looking for a concise way to model full architectures of my various projects, without relying on having a typical markdown file.
Which is separate from the actual types in the code.
Which is separate from the deployment section of the docs.
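Roughly the kind of thing I have in mind, as a sketch (the component names are invented):

    component(api_gateway).  component(auth_service).  component(orders_db).
    depends_on(api_gateway, auth_service).
    depends_on(auth_service, orders_db).

    reaches(A, B) :- depends_on(A, B).
    reaches(A, C) :- depends_on(A, B), reaches(B, C).

    % ?- reaches(api_gateway, orders_db).    true
    % ?- reaches(orders_db, X).              false: nothing is reachable from orders_db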
I studied Prolog back in 2014. It was used in an AI course. I found it very confusing: trying to code A*, N-Queens, or anything in it was just too much. Python, in contrast, was a god-send. I failed the subject twice in my MSc (luckily passing the MSc was based on the total average), but did a similar course at UC Berkeley, with Python: aced it, loved it, and learned a lot.
Never again :D
A similar thing happened at my university in an Advanced Algorithms course. Students failed it so often that the university was forced to make the course easier to pass by removing the minimum grade required to pass.
I believe your case (and many other students) is that you couldn't abstract yourself from imperative programming (python) into logic programming (prolog).
It's a query language for a graph database. You can write A* and N-Queens in SQL, but why?
Performance, far better performance. Same reason you ever use SQL. Prolog can do the same thing for very specific problems.
PS Prolog is a Horn clause solver. You characterizing it as a query language for a graph database, well it doesn't put you in the best light. It makes it seem like you don't understand important foundational CS math concepts.
I have no idea why are you dissing query languages. Software that makes those work is immensely complex and draws on a ton of CS math concepts and practical insights. But maybe you don't understand that.
I'm using SQL to do SQL things. And I'm sure when I somehow encounter the 1% of problems that prolog is the right fit for I'd be delighted to use it. However doing general algorithms in Prolog is as misguided as in SQL.
> I have no idea why are you dissing query languages.
I'm not. I'm pointing out that saying a Horn clause interpreter is a graph query language indicates a fundamental misunderstanding on your part. Prolog handles anything you want to say in formal logic very well (at the cost of not doing anything else well).
SQL on the other hand uses a completely different mathematical framework (relational algebra and set theory). This allows really effective optimization and query planning on top of a DB kernel.
A graph DB query language on the other hand should be based upon graph theory. Which is another completely different mathematical model. I haven't been impressed by the work in this area. I find these languages are too often dialects of SQL instead of a completely different thing based upon the correct mathematical model.
PS I used to write DBs. Discretion is the better part of valor here.
I remember writing a Prolog(ish) interpreter in Common Lisp in a '90s AI course in grad school for theorem proving (which is essentially what Prolog is doing under the hood). Really foundational to my understanding of how declarative programming works. In an ideal world I would still be programming in Lisp and using Prolog tools.
> In an ideal world…
I see this sentiment a lot lately. A sense of missed nostalgia.
What happened?
In 20 years, will people reminisce about JavaScript frameworks and reminisce how this was an ideal world??
Speaking as someone who just started exploring Prolog and Lisp, and ended up in the frozen north isolated from the internet: access. The tools were initially locked/commercial-only during a critical period, and then everyone was oriented around GUIs - and GUI environments were very hostile to the historical tools, and thus provided a different kind of access barrier.
A side issue is that the LISP ecology in the 80s was hostile to "working well with others" and wanted to have the entire ecosystem in its own image files (which, btw, is one of the same reasons I'm wary of Rust, cough).
Really, it's only become open once more with the rise of WASM, systemic efficiency of computers, and open source tools finally being pretty solid.
I can tell you, from the year 2045, that running the worlds global economy on Javascript was the direct link to the annihilation of most of our freedom and existence. Hope this helps.
Lucky you and your multiverse. In our multiverse we vibe coded the economy until the LLM decided we needed to construct more paperclips.
It is not nostalgia. It is mathematical thought. It is more akin to an equation and more provably correct. Closer to fundamental truth -- like touching fundamental reality.
I also see CL, or Tcl+C, or Assembly as an ideal world.
I remember a Prolog project I did in undergrad that would fit connecting parts of theoretical widgets together based on constraints about how the different pieces could connect. It just worked instantly, and it felt like magic, because I had absolutely no clue how I would have coded that in Pascal or COBOL at the time. It blew my mind because the program was so simple.
Prolog is easily one of my favorite languages, and like many others in this thread, I first encountered it at university. I ended up teaching it for a couple of years (along with Haskell), and ever since, I've gone on an involuntary Prolog bender of sorts once or twice a year. I almost always use it for Advent of Code as well.
Hah. Found this book back at my dad's this past winter: https://imgur.com/a/CyG1E2P
Had never heard of it before, and this is the first I'm hearing of it since.
Also had other cool old shit, like CIB copies of Borland Turbo Pascal 6.0, old Maxis games, Windows 3.1
Clocksin is the standard Prolog textbook used in universities. I studied from the 5th edition.
Nice. Do you use Prolog much today? If so, when do you reach for it?
No, just coming back to it for the intellectual curiosity, nothing more.
Declarative languages are fantastic for reasoning about code.
But the true power is unlocked once the underlying libraries are implemented in a way that surpasses the performance a human can achieve.
Since implementation details are hidden, caches and parallelism can be added without the programmer noticing anything other than a performance increase.
This is why SQL has received a boost over the last decade from massively parallel implementations such as BigQuery, Trino, and to some extent DuckDB. And what about adding a CUDA backend?
But all this comes at a cost and needs to be planned so it is only used when needed.
Look into Futhark; it's a pure FP language (based on ML, ick) that outputs CUDA.
I'll never understand how it's a programming language and not a graph database with a query language. It's more MongoDB than Fortran.
Because it's more powerful than MongoDB or Fortran. The cut operator, for instance, gives it the ability to express things you just can't do in those other systems. The trade-off is that mastering the cut operator is a rare skill, and only the one person who can do it can maintain the Prolog code. Compare that with MongoDB, where even the village idiot can use it, but at a huge performance cost.
I don't know about MongoDB and its query language, but wrt Fortran, it's unreasonable to say that Prolog is more powerful than Fortran (or vice versa). A more reasonable statement is that Prolog is more expressive than Fortran (though this gets fuzzy, we have to define expressiveness in a way that lets us rank languages). But the power of a language normally means what we can compute using that language. Prolog and Fortran both have the same level of "power", but it's certainly fair to say that expressing many programs is easier in Prolog than Fortran, and there are some (thinking back to my scientific computing days) that are easier to express in Fortran than Prolog.
I would say most programs are easier in Fortran. But there are things you can express in Prolog that you can't in Fortran; there is nothing like the cut operator in Fortran, for example. They are very different animals.
You seem to be confusing two different things: what is easily or natively expressed in the language, and what can be expressed in the language at all.
You can create a logical equivalent of the cut operator in Fortran if you wanted to, but there's no native mechanism or operator to rely on. The languages possess the same computing "power"; the difference is not in what they can compute, which is your claim with "there are things you can't express in Fortran but you can in Prolog" (utter nonsense). Anything you can get a Prolog program to do, you can get a Fortran program to do (and vice versa).
> You can create a logical equivalent of the cut operator in Fortran if you wanted to
In isolation, no you can't. You could implement a Prolog interpreter in Fortran, however, and if you did that, you would be able to write a cut operator, because then you would be interacting directly with Prolog's machinery. Part of the definition of the cut operator involves changing how the code around it behaves. You can't do this in Fortran (or other languages) normally. Then there is the entire concept of backtracking, which isn't native in any other language (that I know of).
You could probably make a very poor cut operator in a language with an Any/Object type and casting, but why would you? You are not wrong about the math, but you are ignoring the absurd amount of code you would have to write to do it. It's a bit hand-wavy to say that because you can implement Prolog in a language, the language is just as powerful. That is mathematically correct, but in practice it really isn't.
I think the cut operator doesn't make sense for any other language because Prolog doesn't execute code linearly. It executes it as a depth-first search with backtracking. Only when you have something that walks the tree does it make sense to have a cut operator that prevents backtracking at certain spots.
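For anyone following along, a minimal textbook-style sketch of what the cut does (max/3 here is the standard illustration, not code from any particular system):

    % Once X >= Y has succeeded, the cut (!) commits to the first
    % clause, so backtracking can never fall through to the second
    % clause and wrongly return Y as the maximum.
    max(X, Y, X) :- X >= Y, !.
    max(_, Y, Y).

    % ?- max(3, 2, M).
    % M = 3.          % and no spurious M = 2 on backtracking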
I don't think the person you responded to knows what the cut operator is, or they wouldn't have written any of their nonsense comments. They seem to think that it's some magical thing and not, as you wrote, a way to stop backtracking from going back through some point. You can implement that in any appropriate search system in any language. It might not be an operator, but it would carry the same meaning and effect.
There's also another difference: MongoDB and Fortran served a purpose.
Previously:
Learn Prolog Now - https://news.ycombinator.com/item?id=9246897 - March 2015 (72 comments)
Learn Prolog now - https://news.ycombinator.com/item?id=1976127 - Dec 2010 (31 comments)
How many Prolog programmers does it take to change a lightbulb?
No.
Always felt this would be the language Sherlock Holmes would use... so be sure to wear the hat when learning it.
“A touch! A distinct touch!” cried Holmes. "You are developing a certain unexpected vein of pawky humour, Watson, against which I must learn to guard myself".
-- from "The Valley of Fear" by Arthur Conan Doyle.
That's it, I'm convinced. Now I'm doing my next startup in Prolog.
That's the spirit. Make sure someone else is paying.
I recently asked @grok about Prolog being useless incomprehensible shit for anything bigger than one page:
Professionals write Prolog by focusing on the predicates and relations and leaving the execution flow to the interpreter. They also use the Constraint Logic Programming extensions (like clpfd) which use smart, external algorithms to solve problems instead of relying on Prolog's relatively "dumb" brute-force search, which is what typically leads to the "exploding brain" effect in complex code.
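As a rough illustration of that style, here is a toy constraint problem (assuming SWI-Prolog's library(clpfd); the puzzle itself is made up):

    :- use_module(library(clpfd)).

    % Two digits whose sum is 10 and whose difference is 4:
    % the constraints prune the domains before label/1 searches.
    pair(X, Y) :-
        [X, Y] ins 0..9,
        X + Y #= 10,
        X - Y #= 4,
        label([X, Y]).

    % ?- pair(X, Y).
    % X = 7, Y = 3.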
--- Worth mentioning here is that I wrote Prolog all on my own in 1979. On top of Nokolisp of course. There was no other functioning Prolog at that time I knew about.
Thereafter I have often planned an "Infinity-Prolog" which could solve impossible problems with lazy evaluation.
I just learned from @grok that this Constraint Logic is basically what I was aiming at.
How many times are you going to "write" the same comment in one discussion?
There are declarative languages like SQL and XSLT.
And then there are declarative languages like Prolog.
I really enjoyed learning Prolog in university, but it is a weird language. I think for 98% of tasks I would not want to use Prolog, but for the remaining 2% it's extremely well suited. I have always wished that I could easily call Prolog from other languages when it suited the use case; however, good luck getting most companies to allow writing some code in Prolog.
That is where Lisp or Scheme weirdly shines. It is incredibly easy to add prolog to a Lisp or a Scheme. It’s almost as if it comes out naturally if you just go down the rabbit hole.
“The Little Prover” is a fantastic book for that. The whole series is.
I worked through The Little Schemer but not The Little Prover; I think I'll take a look at that. Thanks.
One can of course add the same stuff to other languages in the form of libraries, but Lisp/Scheme make it incredibly easy to make it look like part of the language itself and seem a mere extension of the language. So you can have both worlds if you want to. Lisp/Scheme is not dead.
In fact, in recent years people have started contributing again and are rediscovering the merits.
Racket really shines in this regard: Racket makes it easy to build little DSLs, but they all play perfectly together because the underlying data model is the same. Example from the Racket home page: https://racket-lang.org/#any-syntax
You can have a module written in the `#lang racket` language (i.e., regular Racket) and then a separate module written in `#lang datalog`, and the two can talk to each other!
[dead]
But I don't wanna!
I love Prolog, and have seen so many interesting use cases for it.
In the end though, it mostly just feels enough of a separate universe to any other language or ecosystem I'm using for projects that there's a clear threshold for bringing it in.
If there were a really strong Prolog implementation with a great community and ecosystem around it, in say Python or Go, that would be killer. I know there are some implementations, but the ones I've looked into seem to be either not very complete in their Prolog support, or to have close to nonexistent usage.
What kind of problems is Prolog helping to solve besides GOFAI, theorem proving and computational linguistics?
Here's a page with some examples of use-cases that fit Prolog well: https://www.metalevel.at/prolog/business
"Sometimes, when you introduce Prolog in an organization, people will dismiss the language because they have never heard of anyone who uses it. Yet, a third of all airline tickets is handled by systems that run SICStus Prolog. NASA uses SICStus Prolog for a voice-controlled system onboard the International Space Station. Windows NT used an embedded Prolog interpreter for network configuration. New Zealand's dominant stock broking system is written in Prolog and CHR. Prolog is used to reason about business grants in Austria."
Some other notable real projects using Prolog are TerminusDB, the PLWM tiling window manager, GeneXus (which is a kind of a low-code platform that generated software from your requirements before LLMs were a thing), the TextRazor scriptable text-mining API. I think this should give you a good idea of what "Prolog-shaped" problems look like in the real world.
Others have more complete answers, but the value for me of learning Prolog (in college) was being awakened to a refreshingly different way of expressing a program. Instead of saying "do this and this and this", you say "here's what it would mean for the program to be done".
At work, I bridged the gap between task tracking software and mandatory reports (compliance, etc.). Essentially, it handles distributing the effective working hours of workers across projects, according to a varied and very detailed set of constraints (people take time off, leave the company and come back, sick days, different holidays for different remote workers, folks work on multiple stuff at the same time, have gaps in task tracking, etc.).
In the words of a colleague responsible for said reports it 'eliminated the need for 50+ people to fill timesheets, saves 15 min x 50 people x 52 weeks per year'
It has been (and still is) in use for 10+ years already. I'd say 90% of the current team members don't even know the team used to have to "punch a clock" or fill timesheets way back.
Prolog's constraint solving and unification are exactly what is required for solving type-checking constraints in a Hindley-Milner type system.
Yes, absolutely...I just wish the people who wrote FP compilers knew this.
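A minimal sketch of that idea, for a made-up toy lambda calculus (simple types only, no let-polymorphism; this is an illustration, not code from any real compiler):

    % typeof(+Env, +Term, -Type): unification does the constraint
    % solving that an HM-style checker would otherwise do by hand.
    typeof(_,   int(_),      int).
    typeof(Env, var(X),      T)         :- member(X-T, Env).
    typeof(Env, lam(X, B),   fun(A, R)) :- typeof([X-A|Env], B, R).
    typeof(Env, app(F, Arg), R)         :- typeof(Env, F, fun(A, R)),
                                           typeof(Env, Arg, A).

    % ?- typeof([], lam(x, var(x)), T).
    % T = fun(_A, _A).   % the identity function has type a -> a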
Any kind of problem involving the construction, search, or traversal of graphs of any variety, from cyclic semi-directed graphs to trees; linear programming; constraint solving; compilers; databases; formal verification of any kind, not just theorem proving; computational theory; data manipulation; and in general anything.
See my comment here - https://news.ycombinator.com/item?id=45902960
Scheduling, relational modeling, parsing. These things come up all the time. Look at DCGs if you want to quickly become dangerous.
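For anyone who hasn't met them, DCGs are just a grammar-rule notation that compiles down to ordinary predicates; a tiny made-up example:

    % Recognises (or generates) strings of the form a^n b^n.
    anbn --> [].
    anbn --> [a], anbn, [b].

    % ?- phrase(anbn, [a,a,b,b]).
    % true.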
The background image says "testing version" - is there a production version?
It looks like that is in reference to the embedded interactive code blocks. If you use uBlock Origin you can use the element picker to remove the annoying image.
Is Prolog a use-case language, or is it as versatile as Python?
Python wins out in the versatility conversation because of its ecosystem; I'm still kinda convinced that the language itself is mid.
Prolog has many implementations and you don't have the same wealth of libraries, but yes, it's Turing complete and not of the "Turing tarpit" variety; you could reasonably write entire applications in SWI-Prolog.
Right, Python is usually the second-best choice for a language for any problem --- arguably the one thing it is best at is learning to program (in Python) --- it wins based on ease-of-learning/familiarity/widespread usage/library availability.
Personally I find Python more towards the bottom of the list, despite it being the language I learned on. Especially if the code involved is "pythonic". It just doesn't jibe with my neurochemistry. All the problems of C++ with much greater ambiguity, and I've never really been impressed with the library ecosystem. Yeah, there's a lot, but just like with Node it's just a mountain of unusably bad crap.
I think Lua is the much better language for a wide variety of reasons (most of the good Python libraries are just wrappers around C libraries, which is necessary because Python's FFI is really substandard), but I wouldn't reach for Python or Lua if I'm expecting to write more than 1000 lines of code. They both scale horribly.
I don't know if I would say it's second-best. It just happened to get really popular because it has relatively easy syntax, and NumPy is a really great library that made all of those scientific packages people previously used Fortran and C++ for available in an easier language. This boosted the language right when data science became a thing, right when dynamic languages became popular, right when the "learn 2 code, forget about fundamentals" wave was a thing. It's an okay language, I guess, but I really think it was lucky that NumPy exists rather than a Numby or a Numphp.
That's not why Python is popular. Python is popular because universities don't provide technical support to researchers (which they should). So those researchers picked up the scripting language the sysops in the univ clusters were using. Those same researchers left academia but never learned any CS or other programming languages. Instead they used the 'if all you have is a hammer, everything is a nail' logic and used Python to glue together libraries, mostly written in C.
PS: The big companies that actually make the LLMs don't use Python (anymore). It's a lousy language for ML/AI. It's designed to script Linux GUIs and automate tasks; it started off as a Perl replacement, after all. And this isn't a slight on the folks who write Python itself. It is a problem for all the folks who insist on slamming it into all sorts of places it isn't well suited to because they won't learn any CS.
More like 3rd to 5th best in most categories. There are just a lot of categories.
Its ease of use and deployment give it a lot more staying power.
The syntax is also pretty nice.
FWIK, you can't compare the two. Python is far more general and larger than Prolog, which is more specialized. However, there have been various extensions to Prolog to make it more general; see the Extensions section of the Prolog Wikipedia page - https://en.wikipedia.org/wiki/Prolog#Extensions - e.g. Prolog++ - https://en.wikipedia.org/wiki/Prolog%2B%2B - which lets you do large-scale OO programming with Prolog.
Earlier, Prolog was used in the AI/Expert Systems domains. Interestingly, it was also used to model Requirements/Structured Analysis/Structured Design and in Prototyping. These usages seem interesting to me since there might be a way to use these techniques today with LLMs to have them generate "correct" code/answers.
For Prolog and LLMs see - https://news.ycombinator.com/item?id=45712934
Some old papers/books that I dug up that seem relevant:
Prototyping analysis, structured analysis, Prolog and prototypes - https://dl.acm.org/doi/10.1145/57216.57230
Prolog and Natural Language Analysis by Fernando C. N. Pereira and Stuart M. Shieber (free digital edition) - http://www.mtome.com/Publications/PNLA/pnla.html
The Application of Prolog to Structured Design - https://www.researchgate.net/publication/220281904_The_Appli...
In theory, it's as versatile as Python et al[0] but if you're using it for, e.g., serving bog-standard static pages over HTTP, you're very much using an industrial power hammer to apply screws to glass - you can probably make it work but people will look at you funny.
[0] Modulo that Python et al almost certainly have order(s) of magnitude more external libraries etc.
> you can probably make it work but people will look at you funny
Don't threaten me with a good time
It's a language that should have just been a library. There's nothing noteworthy about it and it's implementable in any working language. Sometimes quite neatly. Schelog is a famous example.
That's like comparing a nuclear reactor to a pickup truck. They are different things and one doesn't replace the other in any meaningful way.
Do you mean Northern Conservative Baptist Great Lakes Region Council of 1879 standard Prolog?[2]
SWI-Prolog (specifically; see [2] again) is a high-level interpreted language implemented in C, with an FFI to use libraries written in C[1], shipping with a standard library for HTTP, threading, ODBC, desktop GUI, and so on. In that sense it's very close to Python. You can do everyday ordinary things with it, like compute stuff, take input and output, serve HTML pages, process data. It starts up quickly, and is decently performant among its peers of high-level GC languages - not V8 fast, but not classic-Java sluggish.
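For instance, the usual hello-world HTTP server looks roughly like this (following the pattern in SWI-Prolog's documentation; treat it as a sketch and check the docs for current details):

    :- use_module(library(http/thread_httpd)).
    :- use_module(library(http/http_dispatch)).

    % Register a handler for / and start a server on the given port.
    :- http_handler(root('.'), say_hi, []).

    server(Port) :- http_server(http_dispatch, [port(Port)]).

    say_hi(_Request) :-
        format('Content-type: text/plain~n~n'),
        format('Hello from Prolog~n').

    % ?- server(8080).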
In other senses, it's not. The normal Algol-derivative things you are used to (arithmetic, text, loops) are clunky and weird. It's got the same problem as other declarative languages - writing what you want is not as easy as it seemed like it was going to be, and performance involves contorting your code into forms that the interpreter/compiler is good with.
It's got the problems of functional languages - everything must be recursion. Having to pass the whole world state in and out of things. Immutable variables and datastructures are not great for performance. Not great for naming either, temporary variable names all over.
It's got some features I've never seen in other languages - the way the constraint logic engine just works with normal variables is cool. Code-is-data-is-code is cool. Code/data is metaprogrammable in a LISP macro sort of way. New operators are just another predicate. Declarative Grammars are pretty unique.
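A small taste of the code-is-data point (just a toplevel toy):

    % A clause is an ordinary term: build it, assert it, call it.
    ?- Head = greet(Name),
       Body = format('hello, ~w~n', [Name]),
       assertz((Head :- Body)),
       greet(world).
    % prints: hello, world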
The way the interpreter will try to find any valid path through your code - the thing which makes it so great for "write a little code, find a solution" - makes it tough to debug why things aren't working. And hard to name things, code doesn't do things it describes the relation of states to each other. That's hard to name on its own, but it's worse when you have to pass the world state and the temporary state through a load of recursive calls and try to name that clearly, too.
This is fun:
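Something along these lines (a sketch of the usual naive version being described):

    % A recursive countdown - looks fine at first glance.
    countdown(0).
    countdown(X) :- write(X), nl, countdown(X - 1).

    % ?- countdown(3).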
It's a recursive countdown. There are no deliberate typos in it, but it won't work. The reason why is subtle - that code is doing something you can't do as easily in Python. It's passing a Prolog source code expression of X-1 into the recursive call, not the result of evaluating X-1 at runtime. That's how easy metaprogramming and code generation is! That's why it's a fun language! That's also how easy it is to trip over "the basics" you expect from other languages.

It's full of legacy, even more than Python is. It has a global state - the Prolog database - but it's shunned. It has two or three different ways of thinking about strings, and it has atoms. ISO Prolog doesn't have modules, but different implementations of Prolog do have different implementations of modules. Literals for hashtables are contentious (see [2] again). Same for object orientation, standard library predicates, and more.
[1] https://www.swi-prolog.org/pldoc/man?section=foreign
[2] https://news.ycombinator.com/item?id=26624442
Learn Datalog Now!
Is there a WebAssembly (WASI) version of SWI-Prolog?
Not sure... Another Prolog compiled to WASM with very good performance is Ciao: https://ciao-lang.org/playground/
The same toplevel also runs from 'node'.
Thanks, I will have a look. I would like to integrate a Prolog into exaequOS.
We had it in university courses and it seemed useless. DSL for backtracking.
Yes. As an add-on or library, it could be useful, but as a language it's just a forgotten dead end.
And in some cases it's easier to understand if you write the backtracking yourself and can edit/debug it. That is, if you write readable code professionally, since such algorithms are not very intuitive to someone seeing them for the first time.
Learn it now? I learned back in the 80s... and have since forgotten
You might have forgotten the language but I bet it must have had some influence on how you think or write programs today. I don’t think the value of learning Prolog is necessarily that you can then write programs in Prolog, but that it shifts your perspective and adds another dimension to how you approach problems. At least this is what it has done for me and I find that still valuable today.
yes
[flagged]
Farts:
what the fuck does this mean?
are you saying you can’t comprehend prolog programs?
It means that timonoko doesn't like to think and would rather ask grok to think for them and post weird comments about it here on HN. They've been doing this for a while.