
Posted by mpweiher 9/12/2025

Many hard LeetCode problems are easy constraint problems (buttondown.com)
679 points | 533 comments | page 2
RomanPushkin 9/12/2025|
- Constraint solvers? That's a nice concept, I heard about this once. However, for the purposes of the interview, let's just write some Python code, I wanna see your way of thinking...

(I think it's almost impossible to convince your interviewer to let you use a constraint solver, even though the concept itself is great)

aaronblohowiak 9/12/2025||
import z3
cerved 9/12/2025||
from ortools.sat.python import cp_model
henry2023 9/12/2025||
Have you ever tried, or is this your assumption of what the interviewer would say?
itissid 9/12/2025||
A long time ago, just for fun, I wrote a constraint-solver program that could figure out which of the high-yield banks recommended on Doctor of Credit (https://www.doctorofcredit.com/high-interest-savings-to-get/) to put money into: maximize the yield subject to <= `X` money, <= `Y` # of transactions on debit cards, and other constraints (boolean and real-valued).

I played with it for a while when interest rates were really low and used it for my own rainy-day savings (I did get tired of changing accounts all the time).
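
For illustration only (not the actual project), here's a minimal sketch of how such a model might look with OR-Tools CP-SAT; the bank names, rates, caps, and debit-card requirements are invented, and everything is scaled to integer dollars and basis points since CP-SAT only handles integers:

    # Hypothetical data: (name, rate in basis points, balance cap in dollars,
    # required debit-card transactions per month). Not real offers.
    from ortools.sat.python import cp_model

    banks = [
        ("BankA", 500, 10_000, 10),
        ("BankB", 450, 25_000, 0),
        ("BankC", 400, 15_000, 5),
    ]
    TOTAL_CASH = 30_000   # <= X dollars available
    MAX_TXNS = 12         # <= Y debit-card transactions per month

    model = cp_model.CpModel()
    use = [model.NewBoolVar(f"use_{name}") for name, _, _, _ in banks]
    amt = [model.NewIntVar(0, cap, f"amt_{name}") for name, _, cap, _ in banks]

    for u, a in zip(use, amt):
        model.Add(a == 0).OnlyEnforceIf(u.Not())  # no deposit unless the account is opened

    model.Add(sum(amt) <= TOTAL_CASH)
    model.Add(sum(u * txns for u, (_, _, _, txns) in zip(use, banks)) <= MAX_TXNS)
    model.Maximize(sum(a * rate for a, (_, rate, _, _) in zip(amt, banks)))  # total yield

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        for u, a, (name, *_) in zip(use, amt, banks):
            if solver.Value(u):
                print(name, solver.Value(a))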

toomuchtodo 9/12/2025|
Repo?
krosaen 9/12/2025||
I find this post interesting independent of the question of whether leetcode problems are a good tool for interviews. It's: here are some kinds of problems constraint solvers are useful for. I can imagine a similar post about non-linear least-squares solvers like Ceres.
smj-edison 9/12/2025|
Yeah, especially for learning how to use a solver!

> Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration.

He's exactly right about what tutorials are out there for constraint programming (I've messed around with it before, and it was pretty much Sudoku). Having a large body of existing problems to practice against is great.
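
As one small example of that flavor, here's a sketch of the coin-change problem discussed elsewhere in the thread (37 cents, US denominations, minimize the coin count) as a CP-SAT model rather than the usual dynamic-programming solution:

    # "Coin Change" as a constraint model: pick a count for each denomination
    # so the counts weighted by value hit the target, minimizing total coins.
    from ortools.sat.python import cp_model

    coins = [1, 5, 10, 25]
    target = 37

    model = cp_model.CpModel()
    counts = [model.NewIntVar(0, target, f"n_{c}") for c in coins]

    model.Add(sum(n * c for n, c in zip(counts, coins)) == target)
    model.Minimize(sum(counts))

    solver = cp_model.CpSolver()
    if solver.Solve(model) == cp_model.OPTIMAL:
        print({c: solver.Value(n) for c, n in zip(coins, counts) if solver.Value(n)})
        # -> {1: 2, 10: 1, 25: 1}, i.e. four coins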

drob518 9/12/2025||
SAT, SMT, and constraint solvers are criminally underutilized in the software industry. We need more education about what they are, how they work, and what sorts of problems they can solve.
LegionMammal978 9/12/2025||
At least personally, I've been very underwhelmed by their performance when I've tried using them. Usually past a few dozen variables or so is when I start hitting unacceptable exponential runtimes, especially for problem instances that are unsatisfiable or barely-satisfiable. Maybe their optimizations are well-suited for knapsack problems and other classic OR stuff, but if your problem doesn't fit the mold, then it's very hit-or-miss.
zaxioms 9/13/2025|||
I'm surprised to hear this. Modern SAT solvers can easily handle many problems with hundreds of thousands of variables and clauses. Of course, there are adversarial problems where CDCL solvers fail, but I would be fascinated if you can find industrial (i.e., human-written for a specific purpose) formulas with "dozens of variables" that a solver can't solve fairly quickly.
LegionMammal978 9/13/2025||
One thing that I spent a particularly long time trying to get working was learning near-minimum-size exact languages from positive and negative samples. DFAMiner [0] has a relatively concise formulation for this in terms of variables and clauses, though I have no way to know if some other reformulation would be better suited for SAT solvers (it uses CaDiCaL by default).

It usually starts taking a few seconds around the ~150-variable mark, and hits the absolute limit of practicality by 350–800 variables; the number of clauses is only an order of magnitude higher. Perhaps something about the many dependencies in a DFA graph puts this problem near the worst case.

The annoying thing is, there do seem to be heuristics people have written for this stuff (e.g., in FlexFringe [1]), but they're all geared toward probabilistic automata for anomaly detection and similar fuzzy ML stuff, and I could never figure out how to get them to work for ordinary automata.

In any case, I eventually figured out that I could get a rough lower bound on the minimum solution size, by constructing a graph of indistinguishable strings, generating a bunch of random maximal independent sets, and taking the best of those. That gave me an easy way to filter out the totally hopeless instances, which turned out to be most of them.
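
For illustration, here's a rough sketch of that lower-bound idea as described above (not the actual code); the `indistinguishable` predicate is a hypothetical stand-in for whatever check decides that two sample strings could share a DFA state:

    # Build a graph joining strings that could share a state; any independent
    # set is then pairwise distinguishable, so its size lower-bounds the number
    # of states. Random greedy maximal independent sets give a cheap bound.
    import random

    def state_lower_bound(samples, indistinguishable, tries=100):
        adj = {s: set() for s in samples}
        for i, a in enumerate(samples):
            for b in samples[i + 1:]:
                if indistinguishable(a, b):
                    adj[a].add(b)
                    adj[b].add(a)

        best = 0
        for _ in range(tries):
            order = list(samples)
            random.shuffle(order)
            chosen = set()
            for s in order:
                if adj[s].isdisjoint(chosen):  # no already-chosen neighbor
                    chosen.add(s)
            best = max(best, len(chosen))
        return best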

[0] https://github.com/liyong31/DFAMiner

[1] https://github.com/tudelft-cda-lab/FlexFringe

cerved 9/12/2025||||
I've worked on a model with thousands of variables and hundreds of thousands of parameters with a hundred constraints. There are pitfalls you need to avoid, like reification, but it's definitely doable.

Of course, NP-hard problems blow up exponentially as they grow, but that doesn't change if you use another exact solving technique.

Using local search is very useful for scaling, but at the cost of proven optimality.

j2kun 9/12/2025||||
I think this hits the nail on the head: performance is the obstacle, and you can't get good performance without some modeling expertise, which most people don't have.
drob518 9/12/2025||
Hence my call for more education.
js8 9/12/2025||||
I wish I knew better how to use them for these coding problems, because I agree with the GP that they're underutilized.

But I think that if you have a constraint problem that has an efficient algorithm yet chokes a general constraint solver, that should be treated as a bug in the solver. It means the solver is using bad heuristics somewhere.

LegionMammal978 9/12/2025||
I'm pretty sure that due to Rice's theorem, etc., any finite set of heuristics will always miss some constraint problems that have an efficient solution. There's very rarely a silver bullet when it comes to generic algorithms.
sigbottle 9/12/2025|||
I think they're saying that the counter-examples are so pathological that, in most cases, if you're doing any kind of auto-generation of constraints - for example, a DSL backed by a solver - the solver's heuristics should be good enough.

Like it might even be the case that certain types of pretty powerful DSLs just never generate "bad structures". I don't know, I've not done research on circuits, but this kind of analysis shows up all the time in other adjacent fields.

LegionMammal978 9/12/2025||
Idk, I also thought so once upon a time. "Everyone knows that you can usually do much better than the worst case in NP-hard problems!" But at least for the non-toy problems I've tried using SAT/ILP solvers for, the heuristics don't improve on the exponential worst case much at all. It's seemed like NP-hardness really does meet the all-or-nothing stereotype for some problems.

Your best bet using them is when you have a large collection of smaller unstructured problems, most of which align with the heuristics.

sigbottle 9/12/2025|||
> Your best bet using them is when you have a large collection of smaller unstructured problems, most of which align with the heuristics.

Agreed. An algorithm at our company right now turns a directed graph problem, which to most people would seem crazy, into roughly ~m - n (m edges, n nodes) relatively small SAT checks. Stuffing all the constraints into one big ILP solve would be super inefficient (and honestly ill-defined). Instead, by defining the problem statement properly and carving out the right invariants, you can decompose the problem into smaller NP-complete problems.

Definitely a balancing act of design.

drob518 9/12/2025|||
For some problems, there is not much you can do. But for many, it works.
zaxioms 9/13/2025|||
Rice's theorem is about decidability, not difficulty. But you are right that assuming P != NP there is no algorithm for efficient SAT (and other constraint) solving.
drob518 9/12/2025|||
Well, they aren’t magic. You have to use them correctly and apply them to problems that match how they work. Proving something is unsat is co-NP-hard, and worst-case exponential as far as anyone knows. These solvers don’t change that.
LegionMammal978 9/12/2025||
Of course they aren't magic, but people keep talking about them as if they're perfectly robust and ready-to-use for any problem within their domain. In reality, unless you have lots of experience in how to "use them correctly" (which is not something I think can be taught by rote), you'd be better off restricting their use to precisely the OR/verification problems they're already popular for.
drob518 9/12/2025||
Hence my statement about education. All tools must be used correctly in their proper domain, that is true. Don’t try to drive screws with a hammer. But I'm curious: what problems did you try them on and find them wanting, and what was your alternative? I actually find that custom solutions work better for simple problems and that solvers do a lot better when the problem complexity grows. You’re better off solving the Zebra puzzle and its ilk with brute-force code, not a solver, for instance.
loeg 9/12/2025|||
In what way? They're useful for toy problems like this but they're very slow on larger problems.
drob518 9/12/2025|||
SAT solvers are used daily to generate solutions for problems that have literally millions of variables. So, what you said is just wrong on its face. Yes, some talented people can write custom code that solves specific problems faster than a general-purpose solver, particularly for easy special cases of the general problem, but most of the time that results in the programmer recreating the guts of a solver customized to a specific problem. There’s sort of a corollary of Greenspun’s Tenth Rule: every sufficiently complicated program also contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a SAT or SMT solver.
sigbottle 9/12/2025||
I mean, right tool for the right job. There are plenty of formulations and problems (our job has plenty of arbitrarily hard graph algorithms) where 90% of the problem is just a very clever reduction with nice structure.

Then the final 10% is either NP-hard, or we want to add some DSL flexibility, which introduces halting-problem issues. Once you lower it enough, in come the SMT solvers.

cerved 9/12/2025|||
Define large. We've written a model which solves real business problems in 8K lines of MiniZinc, and it wasn't slow.

The conventional wisdom is that the larger you make an NP-hard problem, the slower it's going to get, regardless of the algorithm.

Qem 9/14/2025||
What are some good books to get started on the subject?
j2kun 9/12/2025||
> The real advantage of solvers, though, is how well they handle new constraints.

Well said. One of the big benefits of general constraint solvers is their adaptability to requirements changes. Something I learned well when doing datacenter optimization for Google.

shoo 9/13/2025|
Agreed - adaptability to changing requirements is a very strong advantage for real-world work.

The first attempt to formalise some practical problem into a mathematical optimisation problem is often not quite right. You discover new things that change the problem statement after reviewing example solutions with experts who exclaim "that solution couldn't possibly work, because <discovery of additional requirements>".

A neat dynamic programming solution is a glorious thing, but dynamic programming relies on a mathematical problem with some elegant recursive substructure that you can exploit. Sometimes such elegant recursive substructure is going to be broken by additional requirements or side constraints that you discover during the project -- so the real valuable problem you actually need to solve has no such substructure, and you need to throw out your dynamic programming approach and go back to a blank sheet of paper to design a viable attack on the real valuable messy problem.

For lower-risk delivery of useful outcomes, it probably makes the most sense to use general purpose flexible black box solvers like constraint or MIP solvers early in the project, and focus efforts on rapidly discovering and extracting all the requirements, so you zero in on the real problem quicker. You don't want to prematurely invest a lot of time and effort building a highly efficient bespoke elegant solution to the wrong problem, then discover a month before a delivery deadline that everything needs to be thrown out.
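
A toy sketch of what that flexibility looks like with a CP-SAT model (the assignment problem and the "discovered" requirements below are invented); the point is only that each new requirement lands as one more constraint on the same model rather than a redesign:

    # Assign each task to one worker, minimizing total cost (made-up numbers).
    from ortools.sat.python import cp_model

    cost = [  # cost[t][w] = cost of giving task t to worker w
        [4, 7, 3],
        [6, 2, 8],
        [5, 5, 1],
        [9, 4, 6],
    ]
    num_tasks, num_workers = len(cost), len(cost[0])

    model = cp_model.CpModel()
    x = {(t, w): model.NewBoolVar(f"x_{t}_{w}")
         for t in range(num_tasks) for w in range(num_workers)}

    for t in range(num_tasks):  # every task goes to exactly one worker
        model.Add(sum(x[t, w] for w in range(num_workers)) == 1)

    model.Minimize(sum(cost[t][w] * x[t, w] for t, w in x))

    # "That solution couldn't possibly work, because..." -- each new discovery
    # from the experts is just one extra constraint:
    model.Add(x[2, 1] == 0)                                  # worker 1 can't do task 2
    model.Add(sum(x[t, 0] for t in range(num_tasks)) <= 2)   # worker 0 capped at two tasks

    solver = cp_model.CpSolver()
    if solver.Solve(model) == cp_model.OPTIMAL:
        print([next(w for w in range(num_workers) if solver.Value(x[t, w]))
               for t in range(num_tasks)])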

tannhaeuser 9/12/2025||
Here's an easy ad-hoc Prolog program for the first problem:

    % Given a set of coin denominations,
    % find the minimum number of coins
    % required to make change.
    % IE for USA coinage and 37 cents,
    % the minimum number is four
    % (quarter, dime, 2 pennies).
    num(0). num(1). num(2).
    num(3). num(4). num(5).
    ?- num(Q), num(D), num(P),
       37 is Q * 25 + D * 10 + P.
You can just paste it into [1] to execute in the browser. Using 60 as the target sum is more interesting, as you can enumerate two solutions.

(Reposting what I already posted here two days ago [2].)

[1]: https://quantumprolog.sgml.net/browser-demo/browser-demo.htm...

[2]: https://news.ycombinator.com/item?id=45205030

6gvONxR4sf7o 9/12/2025||
Of course, the challenge is that the next question after solving a leetcode problem is often to explain and optimize the performance characteristics, which in Prolog can get stupidly hairy.
skydhash 9/12/2025||
I've actually used pseudo-Prolog to explain how to solve leetcode problems to a friend. Write the facts, then write the constraints, and then state your problem. By the last part, they'd already understood how to solve it, or at least how to write the program that can answer the question.
amai 9/12/2025||
The documentation for Google's OR-Tools comes with many interesting examples of constraint problems, e.g.

https://developers.google.com/optimization/lp/stigler_diet

qnleigh 9/12/2025||
I agree with the other comments here that using a constraint solver defeats the purpose of the interview. But this seems like a good case for learning how to use a constraint solver! Instead of spending hours coding a custom solution to a tricky problem, you could use a constraint solver at first and only write a custom solution if it turns out to be a bottleneck.
cobbzilla 9/12/2025||
Here’s my empirical evidence based on several recent “coding session” interviews with a variety of software companies. Background: I have been developing software for over 30 years, I hold a few patents, I’ve had a handful of modestly successful exits. I kind of know a little bit about what I am doing. At this stage in my career, I am no longer interested in the super early stage startup lifestyle, I’m looking at IC/staff engineer type roles.

The mature, state-of-the-art software companies do not give me leetcode problems to solve. They give me interesting & challenging problems that force me to both a) apply best practices of varying kinds and yet b) be creative in some aspects of the solution. And these problems are very amenable to “talking through” what I’m doing, how I’m approaching the solution, etc. Overall, I feel like they are effective and give the company a good sense of how I develop software as an engineer. I have yet to “fail” one of these.

It is the smaller, less mature companies that give me stupid leetcode problems. These companies usually bluntly tell me their monolithic codebase (always in a not-statically-typed language) is a total mess and they are “working on domain boundaries”.

I fail about 50% of these leetcode things because I don’t know the one “trick” to yield the right answer. As a seasoned developer, I often push back on the framing and tell them how I would do a better solution by changing one of the constraints, where the change would actually better match the real world problem they’re modeling.

And they don’t seem to care at all. I wonder if they realize that their bullshit interviewing process has both a false positive and a false negative problem.

The false negatives exclude folks like myself who could actually help to improve their codebase with proper, incremental refactorings.

The false positives are the people who have memorized all the leetcode problems. They are hired and write more shitty monolithic hairball code.

Their interviewing process reinforces the shittiness of their codebase. It’s a spiral they might never get out of.

The next time I get one of these, I think I’m going to YOLO it, pull the ripcord early and politely tell them why they’re fucked.

w10-1 9/12/2025||
Yes, it is a death spiral; if you are to lead them, you have to know what to fix when, to avoid making things worse.

The solution is typically not just to fix their code. They got in over their heads by charging ahead and building something they'll regret, but their culture (and likely the interviewer's personal self-regard) depends on believing their (current) tech leaders.

So yes, the interviewer is most comfortable if you chase and find the ball they're hiding.

But the leadership question is whether you can relieve them of their ignorance without also stripping their dignity and future prospects.

I've found (mostly with luck) that they often have a sneaking suspicion that something isn't right, but didn't have the tools or pull to isolate and address it. As a leader if you can elicit that, and then show some strategies for doing so, you'll improve them and the code in a way that encourages them that what was hard to them is solvable with you, which helps them rely on you for other knotty problems.

It's not really that you only live once; it's that this opportunity is here now and should have your full attention, and to be a leader you have to address it directly but from everyone's perspective.

Even if you find you'd never want to work with them, you'd still want to leave them feeling clearer about their code and situation.

cobbzilla 9/12/2025||
I agree with everything you've written.

Clarifying my "YOLO" usage: I was being a little flippant, in the sense that when ending an interview early with direct critical feedback, the most likely outcome is a "burned bridge" with that company (you're never coming back).

Which reminds me of one of my favorite twisted idioms: We'll burn that bridge when we get to it!

I guess I've finally found an acceptable real-world use case for this phrase :)

fifticon 9/12/2025||
may the bridges I burn light the way.
fern_ 9/12/2025|||
There is something to be said for being senior in a way where the people interviewing you are junior enough that they don't have the experience to "click" with the nuance that comes with such problems.

That being said, from a stoicism point of view, the interview ends up becoming a meta-challenge: how you approach a problem that is not necessarily appropriately framed, and how you'd go about doing things and/or gently correcting them.

And if they're not able to appreciate it, then success! You have found that it is not the right organization for you. No need to burn the door down on the way out, just feel relief in that you dodged a bullet (hopefully).

cobbzilla 9/12/2025||
In a few cases, I really liked the company and what they were doing, got along wonderfully with the hiring manager. Then bombed their leetcode BS.

So when I say I’d politely tell them why they’re fucked, it’s actually out of a genuine desire to help the company.

But you’re right, I’m also thankful that they showed their red flag so visibly, early enough, and I’m happy to not move forward!

anal_reactor 9/12/2025||
Maybe the process works as designed. It's just that "hiring the best developer" isn't necessarily the goal here
cerved 9/12/2025|
MiniZinc is a really great modeling language for constraint programming. Back in August I gave a talk at NordConsNet25 on how we used it to build a product configurator in what's (probably) the world's largest MiniZinc model

https://pierre-flener.github.io/research/NordConsNet/NordCon...
