Well, the bigger question is how big a system has to be to warrant breaking out a new technique, much less adding a new runtime or other large dependency.
Now, I have no direct experience with any of the common logic programming systems, only familiarity. But any time I came across something that might have called for one, the need never seemed to justify the cost.
We're talking fewer than 100 rules, most likely fewer than a couple dozen. Stacking some `if`s and a bit of math, strategically grouped into a couple of aptly named wrapper methods to reduce the cognitive load, has all worked pretty well.
And, granted, if I had solid experience with these systems, the onboarding cost would be lower.
When have you found it to be worth cutting over?
I had the (self-inflicted) problem of modeling systems of linear equations in C#. `x` is a Sym, `x+2` is an Expr, `[2x,3y]` is a TermVector, etc. I wanted the comforts of NumPy, so adding an Int to a SymVector should make an ExprVector by broadcast, you should be able to multiply two IntMatrixes together but not two SymMatrixes (since that's not linear), etc. It would have been a lot of wrapper code to write.
Instead, I implemented a minimal set of primitives, and wrote a set of derivation rules (e.g. "if you have X+Y, and Y supports negation, you can derive X-Y by X+(-Y)"), and constraints (operator overloads mustn't have ambiguous signatures, no cycles allowed in the call tree), and set up a code generator.
250 lines of Prolog, plus another 250 of ASP (Answer Set Programming, a Prolog-like language), and I had a code synthesizer.
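The derivation-rule idea can be sketched in miniature. This is a hedged Python toy, not the actual Prolog/ASP system: the type names and starting signatures are illustrative, and only the one rule from the comment ("if you have X+Y, and Y supports negation, you can derive X-Y by X+(-Y)") is implemented, iterated to a fixed point.

```python
# Known primitive operations, mapping (operator, operand types...) to a
# result type. All names here are illustrative, not the author's real set.
known = {
    ("+", "SymVector", "Int"): "ExprVector",  # broadcast add
    ("+", "Expr", "Int"): "Expr",
    ("neg", "Int"): "Int",                    # Int supports negation
    ("neg", "Expr"): "Expr",
}

def close_under_rules(ops):
    """Rule: if X+Y is defined and Y supports negation, derive X-Y
    as X+(-Y). Repeat until no new signatures appear (fixed point)."""
    ops = dict(ops)
    changed = True
    while changed:
        changed = False
        for sig, result in list(ops.items()):
            if len(sig) == 3 and sig[0] == "+":
                _, lhs, rhs = sig
                if ("neg", rhs) in ops and ("-", lhs, rhs) not in ops:
                    ops[("-", lhs, rhs)] = result  # X - Y := X + (-Y)
                    changed = True
    return ops

ops = close_under_rules(known)
print(ops[("-", "SymVector", "Int")])  # prints: ExprVector
```

A real synthesizer would emit the wrapper method bodies for each derived signature; the point here is just that adding one primitive (say, negation for a new type) automatically unlocks every operation derivable from it.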
It was one of the most magical experiences of my entire career. I'd write an optimized version of a function, rerun synthesis, and it would use it everywhere it could. I'd add new types and operators and it'd instantly plumb them through. Seeing code synthesis dance for you feels amazingly liberating. It's like the opposite of technical debt.
For a simple problem, the equivalent of one customer demand (something like n-queens: "placements can't conflict"), the difference between Prolog and, say, Java isn't very large.
https://www.metalevel.at/queens/
https://leetcode-in-java.github.io/src/main/java/g0001_0100/...
That is, if you manage to figure out your own special-case rule engine rather than letting a nest of `if`s and `for`s grow more organically.
If you have ten of these, e.g. more dimensions that produce conflicts, or constraints on where placement is possible in the domain, the Java (or PHP or JavaScript or whatever) solution is likely to turn out rather inscrutable. At least that's my experience in ERP- and CRM-adjacent systems, where I've spent considerable time figuring out and consolidating many years of piecemeal additions of constraint threading in things like planning and booking tasks.
Sometimes I've scratched up algebraic expressions or standalone Prolog implementations to suss out what simpler code ought to look like.
Absolutely valid point: adding complexity without clear benefits is never justified. Most (business) applications have rule logic for specific problems that doesn't exceed a few dozen rules, and regular code often works fine there.
Logic systems tend to show their value when rules become complex with many interdependencies, when non-linear execution patterns emerge, when rules change frequently or need to be defined at runtime, or when you need explanation tools (e.g., why was this conclusion reached?).
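The explanation-tool point can be made concrete with a toy forward-chaining engine that records justifications. This is purely illustrative (the rule and fact names are made up): each fired rule stores which premises produced its conclusion, so "why?" becomes a tree walk.

```python
# Each rule derives a conclusion from a list of premises. Facts map a
# name to either "given" or the premise list that justified it.
rules = [
    ("discount_applies", ["loyal_customer", "order_over_100"]),
    ("loyal_customer", ["member_for_2_years"]),
]
facts = {"member_for_2_years": "given", "order_over_100": "given"}

# Forward chaining: fire rules until nothing new can be concluded.
changed = True
while changed:
    changed = False
    for conclusion, premises in rules:
        if conclusion not in facts and all(p in facts for p in premises):
            facts[conclusion] = premises  # remember the justification
            changed = True

def why(fact, depth=0):
    """Print the derivation tree answering 'why was this concluded?'"""
    reason = facts[fact]
    print("  " * depth + fact + (" (given)" if reason == "given" else ""))
    if reason != "given":
        for premise in reason:
            why(premise, depth + 1)

why("discount_applies")
```

In a real logic system you get this trace for free from the proof search; in hand-rolled `if` stacks, reconstructing "why did we reach this conclusion?" usually means a debugger session.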
I agree, situations where you need to implement a logic system are not extremely common, but maybe I just haven't worked in the industries where they are. Off the top of my head I can think of: factory-floor scheduling; regulatory compliance (e.g., complex tax rules); insurance systems; risk calculation (credit approval); strategy games; retail (complex discounting); etc.