LegionMammal978 11 hours ago

At least personally, I've been very underwhelmed by their performance when I've tried using them. Usually, once I get past a few dozen variables or so, I start hitting unacceptable exponential runtimes, especially on problem instances that are unsatisfiable or barely satisfiable. Maybe their optimizations are well suited to knapsack problems and other classic OR stuff, but if your problem doesn't fit the mold, then it's very hit-or-miss.
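
A concrete sketch of the kind of blowup I mean (a toy example of mine, assuming the z3-solver Python package): pigeonhole formulas are the textbook unsatisfiable family where CDCL-style solvers go exponential.

    from z3 import Bool, Or, Not, Solver

    def pigeonhole(n):
        # n + 1 pigeons into n holes: unsatisfiable, and notoriously
        # hard for resolution-based solvers as n grows
        s = Solver()
        p = [[Bool(f"p_{i}_{j}") for j in range(n)] for i in range(n + 1)]
        for i in range(n + 1):                  # every pigeon gets a hole
            s.add(Or(p[i]))
        for j in range(n):                      # no hole holds two pigeons
            for i in range(n + 1):
                for k in range(i + 1, n + 1):
                    s.add(Or(Not(p[i][j]), Not(p[k][j])))
        return s

    s = pigeonhole(12)
    s.set("timeout", 10_000)  # cap at 10 seconds; check() may come back unknown
    print(s.check())          # unsat, if it finishes in time

Bump n a few steps and watch the runtime climb.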

cerved 6 hours ago

I've worked on a model with thousands of variables, hundreds of thousands of parameters, and a hundred constraints. There are pitfalls you need to avoid, like reification, but it's definitely doable.
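
For illustration (a hedged sketch of mine, not the actual model, assuming OR-Tools CP-SAT's Python API): reification attaches a Boolean to a constraint, and every reified constraint weakens propagation, which is the pitfall at scale.

    from ortools.sat.python import cp_model

    model = cp_model.CpModel()
    x = model.NewIntVar(0, 100, "x")
    b = model.NewBoolVar("b")

    # Full reification of b <=> (x > 50), built from two half-reified
    # constraints; a model with thousands of these propagates poorly.
    model.Add(x > 50).OnlyEnforceIf(b)
    model.Add(x <= 50).OnlyEnforceIf(b.Not())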

Of course, NP-hard problems blow up exponentially in the worst case, but that doesn't change if you use another exact solving technique.

Local search is very useful for scaling, but at the cost of proven optimality.
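
In the same spirit (again just a sketch with CP-SAT rather than a dedicated local-search solver), you can cap the solve time and keep the best incumbent, trading away the optimality proof:

    from ortools.sat.python import cp_model

    model = cp_model.CpModel()
    xs = [model.NewBoolVar(f"x{i}") for i in range(1000)]
    model.Add(sum(xs) <= 700)
    model.Maximize(sum((i % 7) * x for i, x in enumerate(xs)))

    solver = cp_model.CpSolver()
    solver.parameters.max_time_in_seconds = 1.0  # stop early on a real model
    status = solver.Solve(model)
    # FEASIBLE means "best solution found so far"; OPTIMAL means the
    # bound was actually proven before the time limit hit.
    print(solver.StatusName(status), solver.ObjectiveValue())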

j2kun 10 hours ago

I think this hits the nail on the head: performance is the obstacle, and you can't get good performance without some modeling expertise, which most people don't have.
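
One concrete example of what that expertise looks like (my illustration, assuming OR-Tools CP-SAT): knowing to reach for a global constraint instead of its naive decomposition.

    from ortools.sat.python import cp_model

    model = cp_model.CpModel()
    xs = [model.NewIntVar(0, 9, f"x{i}") for i in range(10)]

    # Expert move: one global constraint with strong propagation.
    model.AddAllDifferent(xs)

    # Naive decomposition: logically identical, but propagates far more weakly.
    # for i in range(10):
    #     for j in range(i + 1, 10):
    #         model.Add(xs[i] != xs[j])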

js8 10 hours ago

I wish I knew how to use them better for these coding problems, because I agree with GP that they're underutilized.

But I think that if you have a constraint problem that has an efficient algorithm but chokes a general constraint solver, that should be treated as a bug in the solver. It means the solver is using bad heuristics somewhere.
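
The classic instance of that (my illustration, not something from the thread): parity constraints. CNF-encoded XOR systems can choke plain CDCL solvers, even though Gaussian elimination over GF(2) solves them in polynomial time, roughly like this:

    def solve_xor_system(equations, n):
        # equations: list of (set of variable indices, parity bit),
        # each meaning "the XOR of these variables equals the bit"
        pivot_rows = {}                     # pivot variable -> (row, bit)
        for row, bit in equations:
            row = set(row)
            while row:
                p = max(row)
                if p not in pivot_rows:
                    pivot_rows[p] = (row, bit)
                    break
                prow, pbit = pivot_rows[p]
                row ^= prow                 # XOR two rows: symmetric difference
                bit ^= pbit
            else:
                if bit:                     # derived 0 = 1: unsatisfiable
                    return None
        assign = [0] * n                    # free variables default to 0
        for p in sorted(pivot_rows):        # pivots in increasing order
            row, bit = pivot_rows[p]
            assign[p] = bit ^ (sum(assign[v] for v in row if v != p) & 1)
        return assign

    # x0^x1 = 1, x1^x2 = 0, x0^x2 = 1  ->  one solution is [0, 1, 1]
    print(solve_xor_system([({0, 1}, 1), ({1, 2}, 0), ({0, 2}, 1)], 3))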

  • LegionMammal978 10 hours ago

    I'm pretty sure that due to Rice's theorem, etc., any finite set of heuristics will always miss some constraint problems that have an efficient solution. There's very rarely a silver bullet when it comes to generic algorithms.

    • sigbottle 9 hours ago

      I think they're saying that the counter-examples are so pathological that, in most cases, if you're doing any kind of auto-generation of constraints - for example, a DSL backed by a solver - the solver's heuristics should be good enough.

      Like, it might even be the case that certain types of pretty powerful DSLs just never generate "bad structures". I don't know - I've not done research on circuits, but this kind of analysis shows up all the time in adjacent fields.

      • LegionMammal978 7 hours ago

        Idk, I also thought so once upon a time. "Everyone knows that you can usually do much better than the worst case on NP-hard problems!" But at least for the non-toy problems I've tried using SAT/ILP solvers for, the heuristics don't improve on the exponential worst case much at all. It's seemed like NP-hardness really does live up to the all-or-nothing stereotype for some problems.

        Your best bet for using them is when you have a large collection of smaller unstructured problems, most of which align with the heuristics.

drob518 11 hours ago

Well, they aren’t magic. You have to use them correctly and apply them to problems that match how they work. Proving something is unsat is co-NP-complete in the worst case, and these solvers don’t change that.

  • LegionMammal978 10 hours ago

    Of course they aren't magic, but people keep talking about them as if they're perfectly robust and ready to use for any problem within their domain. In reality, unless you have lots of experience in how to "use them correctly" (which is not something I think can be taught by rote), you'd be better off restricting their use to precisely the OR/verification problems they're already popular for.

    • drob518 10 hours ago

      Hence my statement about education. All tools must be used correctly, in their proper domain; that much is true. Don’t try to drive screws with a hammer. But I'm curious: what problems did you try them on and find them wanting, and what was your alternative? I actually find that custom solutions work better for simple problems and that solvers do a lot better as problem complexity grows. You’re better off solving the Zebra puzzle and its ilk with brute-force code, not a solver, for instance.
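
      Something like this toy version, for instance (my sketch - a 3-house miniature, not the full Zebra puzzle): enumerate permutations and filter.

          from itertools import permutations

          houses = range(3)
          for red, green, blue in permutations(houses):
              for tea, coffee, milk in permutations(houses):
                  if (red == tea                # the red house drinks tea
                          and green == red + 1  # green is right of red
                          and milk == 1):       # the middle house drinks milk
                      print(dict(red=red, green=green, blue=blue,
                                 tea=tea, coffee=coffee, milk=milk))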