Posted by mpweiher 20 hours ago
(I think it's almost impossible to talk your interviewer into constraint solvers, even though the concept itself is great)
I played it for a while when interest rates were really low and used the thing for my own rainy-day savings (I did get tired of changing accounts all the time)
It usually starts taking a few seconds around the ~150-variable mark and hits the absolute limit of practicality somewhere between 350 and 800 variables; the number of clauses is only an order of magnitude higher. Perhaps something about the many dependencies in a DFA graph puts this problem near the worst case.
The annoying thing is, there do seem to be heuristics people have written for this stuff (e.g., in FlexFringe [1]), but they're all geared toward probabilistic automata for anomaly detection and similar fuzzy ML stuff, and I could never figure out how to get them to work for ordinary automata.
In any case, I eventually figured out that I could get a rough lower bound on the minimum solution size by constructing a graph of indistinguishable strings, generating a bunch of random maximal independent sets, and taking the best of those. That gave me an easy way to filter out the totally hopeless instances, which turned out to be most of them.
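The lower-bound trick described above can be sketched roughly like this (a hedged sketch, not the commenter's actual code: the sample format, the `dfaLowerBound` name, and the greedy maximal-set construction are my assumptions):

```javascript
// Sketch of the lower-bound idea: any set of pairwise-distinguishable
// prefixes must map to pairwise-distinct DFA states, so the size of such
// a set lower-bounds the minimum consistent DFA.
// `pos`/`neg` are assumed arrays of accepted/rejected sample strings.

function shuffled(arr) {
  const a = [...arr];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

function dfaLowerBound(pos, neg, rounds = 50) {
  const verdict = new Map(); // string -> 1 (accept) / 0 (reject)
  for (const s of pos) verdict.set(s, 1);
  for (const s of neg) verdict.set(s, 0);

  // All prefixes are candidate "states"; all suffixes are candidate
  // distinguishing experiments.
  const prefixSet = new Set(), suffixSet = new Set();
  for (const s of [...pos, ...neg])
    for (let i = 0; i <= s.length; i++) {
      prefixSet.add(s.slice(0, i));
      suffixSet.add(s.slice(i));
    }
  const suffixes = [...suffixSet];

  // u and v are distinguishable if some suffix w yields opposite verdicts.
  const distinguishable = (u, v) =>
    suffixes.some((w) => {
      const a = verdict.get(u + w), b = verdict.get(v + w);
      return a !== undefined && b !== undefined && a !== b;
    });

  // Random maximal independent sets in the *indistinguishability* graph:
  // greedily keep a prefix iff it is distinguishable from everything kept.
  let best = 0;
  for (let r = 0; r < rounds; r++) {
    const kept = [];
    for (const u of shuffled([...prefixSet]))
      if (kept.every((v) => distinguishable(u, v))) kept.push(u);
    best = Math.max(best, kept.length);
  }
  return best; // any DFA consistent with the sample needs >= best states
}
```

For example, for the language "strings ending in a", the samples `pos = ["a", "ba"]`, `neg = ["", "b", "ab"]` give a bound of 2, matching the minimal 2-state DFA.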
Of course, NP-hard problems blow up exponentially in the worst case, and switching to another exact solving technique doesn't change that.
Local search is very useful for scaling, but at the cost of proven optimality.
But I think if you have a constraint problem that has an efficient algorithm but chokes a general constraint solver, that should be treated as a bug in the solver. It means the solver uses bad heuristics somewhere.
Like it might even be the case that certain types of pretty powerful DSLs just never generate "bad structures". I don't know, I've not done research on circuits, but this kind of analysis shows up all the time in other adjacent fields.
Your best bet for using them is when you have a large collection of smaller unstructured problems, most of which align with the heuristics.
Agreed. An algorithm in our company right now turns a directed graph problem, which to most people would seem crazy, into roughly ~m - n (m edges, n nodes) SAT checks that are relatively small. Stuffing all the constraints into an ILP solver would be super inefficient (and honestly undefined). Instead, by defining the problem statement properly and carving out the right invariants, you can decompose the problem into smaller NP-complete problems.
Definitely a balancing act of design.
Then the final 10% is either NP-hard, or we want to add some DSL flexibility, which introduces halting-problem issues. Once you lower it enough, then come the SMT solvers.
The conventional wisdom is that the larger you make an NP-hard problem, the slower it's going to get, regardless of algorithm.
> Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration.
He's exactly right about what tutorials are out there for constraint programming (I've messed around with it before, and it was pretty much Sudoku). Having a large body of existing problems to practice against is great.
Well said. One of the big benefits of general constraint solvers is their adaptability to requirements changes. Something I learned well when doing datacenter optimization for Google.
The reason was that about 70% of candidates couldn't write a simple loop -- the problem was there to filter those out. The actual solution didn't matter much; I gave a binary decision. The actual conversation matters more.
Somehow someone figured that giving harder problems should result in better candidates. Personally, despite having passed most of the tests I've been subjected to, I don't see the connection.
% Given a set of coin denominations,
% find the minimum number of coins
% required to make change.
% E.g., for US coinage and 37 cents,
% the minimum number is four
% (quarter, dime, 2 pennies).
num(0). num(1). num(2).
num(3). num(4). num(5).
?- num(Q), num(D), num(P),
37 is Q * 25 + D * 10 + P.
You can just paste it into [1] to execute it in the browser. Using 60 as the target sum is more interesting, as you can enumerate two solutions. (Posting again here what I already posted two days ago [2].)
[1]: https://quantumprolog.sgml.net/browser-demo/browser-demo.htm...
let coins = [100, 50, 25, 10, 5, 1];
let change = 1234;
let result = [0, 0, 0, 0, 0, 0];
for (let i = 0; i < coins.length; i++) {
  while (change >= coins[i]) {
    result[i]++;
    change -= coins[i];
  }
}
// [12, 0, 1, 0, 1, 4]
Couldn't help myself, sorry:
function coin_change(change) {
const coins = [25, 10, 5, 1]
for (const coin of coins) {
const n = change / coin | 0
change -= n * coin
console.log(coin, n)
}
}
coin_change(25+10+5+1)
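Worth noting: the greedy approach in both snippets happens to be optimal for US coin denominations, but not for arbitrary ones. A small dynamic-programming sketch (the `minCoins` name is mine) that always finds the true minimum:

```javascript
// Classic coin-change DP: best[v] = fewest coins summing to v
// (Infinity if v is unreachable with the given denominations).
function minCoins(coins, change) {
  const best = new Array(change + 1).fill(Infinity);
  best[0] = 0;
  for (let v = 1; v <= change; v++)
    for (const c of coins)
      if (c <= v && best[v - c] + 1 < best[v]) best[v] = best[v - c] + 1;
  return best[change];
}

// With coins [25, 10, 1] and change 30, greedy picks 25+1+1+1+1+1 = 6 coins,
// but the optimum is 10+10+10 = 3:
console.log(minCoins([25, 10, 1], 30)); // 3
console.log(minCoins([25, 10, 5, 1], 37)); // 4
```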