ColinWright 2 days ago

Graph vertex 3-colouring (G3C) is in NP and is NP-Hard, therefore NP-Complete (NPC).

There is a result that says that if you can solve general ILP problems then you can solve general G3C.
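
A minimal sketch of that reduction (toy graph, and brute force standing in for a real ILP solver so the snippet is self-contained): give each vertex three 0/1 variables, one per colour, force each vertex to pick exactly one colour, and forbid adjacent vertices from sharing one.

  from itertools import product

  # Toy instance: a triangle plus a pendant vertex; illustrative only.
  vertices = ["a", "b", "c", "d"]
  edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
  colours = [0, 1, 2]

  # ILP encoding: binary x[v, c] == 1 means "vertex v gets colour c".
  #   sum_c x[v, c] == 1        for every vertex v  (exactly one colour)
  #   x[u, c] + x[v, c] <= 1    for every edge (u, v) and colour c
  def feasible(x):
      if any(sum(x[v, c] for c in colours) != 1 for v in vertices):
          return False
      return all(x[u, c] + x[v, c] <= 1 for (u, v) in edges for c in colours)

  # Brute force stands in for the ILP solver: try every 0/1 assignment.
  keys = [(v, c) for v in vertices for c in colours]
  for bits in product([0, 1], repeat=len(keys)):
      x = dict(zip(keys, bits))
      if feasible(x):
          print({v: next(c for c in colours if x[v, c]) for v in vertices})
          break
  else:
      print("no 3-colouring exists")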

Satisfiability (SAT) is in NP and is NP-Hard, therefore NP-Complete (NPC). It is therefore equivalent (under some definition of "equivalent") to G3C.

There is a known result that if you can solve arbitrary G3C problems then you can factor integers (factoring, suitably phrased as a decision problem, is in NP, so it reduces to any NPC problem). While factoring integers (FAC) is not known to be NPC, it is clearly very important in today's computing environment.

So if you can solve arbitrary ILP problems, you can break several current encryption algorithms that are thought to be secure.

So we can deduce that ILP is a fairly tricky problem to solve.

The thing that fools a lot of people is that random instances of NPC problems tend to be easy. The core of difficult instances gets smaller (in relative terms) as the problems get bigger, so if, say, you pick a random graph, it's probably trivial to find a vertex 3-colouring, or to show that no such colouring exists.

arnsholt 2 days ago

You can encode travelling salesman as an ILP problem, so it’s a pretty tricky problem.
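
One standard encoding, for the curious, is the Miller-Tucker-Zemlin (MTZ) formulation. A minimal sketch that just prints the constraints for a made-up 4-city instance (no solver involved):

  # x_ij is a 0/1 variable meaning "the tour goes i -> j"; u_i is an
  # integer "visit order" variable used to rule out subtours.
  n = 4  # cities 0..3; illustrative only

  constraints = []
  for i in range(n):  # leave each city exactly once
      constraints.append(" + ".join(f"x_{i}{j}" for j in range(n) if j != i) + " == 1")
  for j in range(n):  # enter each city exactly once
      constraints.append(" + ".join(f"x_{i}{j}" for i in range(n) if i != j) + " == 1")
  for i in range(1, n):  # MTZ subtour elimination (city 0 is the fixed start)
      for j in range(1, n):
          if i != j:
              constraints.append(f"u_{i} - u_{j} + {n}*x_{i}{j} <= {n - 1}")

  print("minimize sum over i != j of c_ij * x_ij, subject to:")
  for c in constraints:
      print(" ", c)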

  • luiwammus 2 days ago

    This is actually a pretty poor example, because we can solve huge TSP instances to optimality in practice (see the Concorde solver). There exist many trickier combinatorial problems, such as packing or covering problems, that are typically much harder to solve to optimality.

    • robotpepi 2 days ago

      The point is the implied NP-completeness of integer programming.

      • nullc a day ago

        The NP-completeness of it is often not very relevant for practical concerns, as the parent post was mentioning.

        There are many applications with an underlying, technically NP-complete problem for which optimal solutions are easily found for almost all instances you will encounter. Because they are NP-complete, it's possible to construct instances that aren't efficiently solvable, but that doesn't mean you'll ever encounter one in practice.

        Then even beyond that there are plenty of cases where an efficient approximation with bounded error exists, and that bounded error might be arbitrarily small.

        To give an easily understood example, consider a knapsack problem where the items are never larger than 1% of the knapsack size.

        There exists a trivial O(N log N) approximation algorithm: sort by value per size, then include items in that order as long as they fit. This yields an optimal solution when the last selected item exactly fills the sack; otherwise it yields an approximate solution that is suboptimal by no more than the unfilled space, which is at most 1% due to the size constraint.

        And 1% is the worst-case approximation error: normally less space will be unused, and the lowest value per space will be a fraction of the highest, so the actual average approximation error in a real application may be, say, 0.01%. If some other application constraint meant the item sizes always added up to exactly the sack size, then the simple algorithm would always give an optimal solution.
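
        A minimal sketch of that greedy in Python (made-up items, each capped at 1% of capacity per the setup above; note this variant keeps scanning past items that don't fit, which can only improve on stopping at the first non-fit):

          import random

          def greedy_knapsack(items, capacity):
              """items: list of (value, size); returns (chosen, total_value)."""
              ranked = sorted(items, key=lambda vs: vs[0] / vs[1], reverse=True)
              chosen, used, total = [], 0.0, 0.0
              for value, size in ranked:  # best value-per-size first
                  if used + size <= capacity:
                      chosen.append((value, size))
                      used += size
                      total += value
              return chosen, total

          # Made-up instance: 500 items, each at most 1% of the capacity.
          random.seed(0)
          capacity = 100.0
          items = [(random.uniform(1, 10), random.uniform(0.1, 1.0)) for _ in range(500)]
          chosen, total = greedy_knapsack(items, capacity)
          used = sum(s for _, s in chosen)
          # Unfilled space is smaller than any remaining item's size (at most
          # 1% of capacity here), which bounds the gap from optimal as argued above.
          print(f"value {total:.1f}, used {used:.2f} of {capacity}")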

        Of course, you can do better than the simple algorithm with some search, but even the dumbest algorithm gives very good approximations and is sometimes optimal... and yet general knapsack (even just the decision form) is NP-complete. This can be quite surprising to anyone who assumes NP-complete == unsolvable.

        Complexity theory is awesome but it can also lead us astray.

thrance 2 days ago

It's harder than linear programming though.

  • lambdaone 2 days ago

    It's substantially harder than linear programming: ILP is equivalent to SAT (and hence NP-complete), whereas linear programming is solvable in polynomial time (only weakly polynomial time with currently known algorithms).
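
    One direction of that equivalence is easy to see concretely: a CNF formula maps straight onto 0/1 linear constraints. A minimal sketch (made-up formula, brute force standing in for a solver):

      from itertools import product

      # Clauses as lists of signed ints: 1 means x1, -2 means "not x2".
      # Made-up formula: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
      clauses = [[1, -2], [2, 3], [-1, -3]]
      n = 3

      # ILP encoding over 0/1 variables: a positive literal contributes x_i,
      # a negative one contributes (1 - x_i); each clause's sum must be >= 1.
      def clause_value(clause, x):
          return sum(x[abs(l)] if l > 0 else 1 - x[abs(l)] for l in clause)

      for bits in product([0, 1], repeat=n):
          x = dict(zip(range(1, n + 1), bits))
          if all(clause_value(c, x) >= 1 for c in clauses):
              print("satisfying assignment:", x)
              break
      else:
          print("unsatisfiable")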

    • firesteelrain 2 days ago

      I normally use the simplex method, which is fast in practice but not polynomial in the worst case, though.

      • sirwhinesalot a day ago

        You can always run a portfolio of Simplex/Barrier/PDLP and grab whichever returns something first. The latter two are usually slower but polynomial-time, so you always win.

        Can't do that with SAT or ILP.
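
        A minimal sketch of that portfolio pattern; the three solver functions are hypothetical stand-ins (sleeps instead of real LP code), and a real setup would likely use processes rather than threads:

          import random, time
          from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

          # Hypothetical stand-ins for simplex / barrier / PDLP on the same LP.
          def simplex(lp): time.sleep(random.uniform(0, 2)); return ("simplex", 42.0)
          def barrier(lp): time.sleep(random.uniform(0, 2)); return ("barrier", 42.0)
          def pdlp(lp):    time.sleep(random.uniform(0, 2)); return ("pdlp", 42.0)

          def portfolio(lp):
              pool = ThreadPoolExecutor(max_workers=3)
              futures = [pool.submit(s, lp) for s in (simplex, barrier, pdlp)]
              done, _ = wait(futures, return_when=FIRST_COMPLETED)
              pool.shutdown(wait=False, cancel_futures=True)  # discard the losers
              return next(iter(done)).result()

          print(portfolio("some LP instance"))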

tgv 2 days ago

You have to find the integers that best satisfy a given objective and constraints. That's fundamentally different from optimizing over the reals. It looks exactly like other numerical problems, but there's no known efficient general algorithm, only (very good) heuristics for specific classes.
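
A tiny made-up instance shows why you can't just solve the relaxation over the reals and round: maximize x + y subject to 2x + 2y <= 3 with x, y in {0, 1}. The real-valued optimum is 1.5 (e.g. x = y = 0.75), but rounding up to (1, 1) is infeasible; brute force over the integer points gives the true optimum of 1:

  from itertools import product

  # Feasible 0/1 points for 2x + 2y <= 3, ranked by objective x + y.
  best = max(
      ((x, y) for x, y in product([0, 1], repeat=2) if 2 * x + 2 * y <= 3),
      key=lambda p: p[0] + p[1],
  )
  print("integer optimum:", best, "objective:", sum(best))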

  • throwaway81523 2 days ago

    Even continuous linear programming wasn't known to be in P-time until the 1980s or so.

    • tgv a day ago

      The parent comment doesn't seem to be versed in complexity theory, so I tried to keep it simple.

      But I think the simplex method is much older. Apparently, it dates back to 1947. Why do you say it wasn't known until the '80s? Are we talking about different things?

      • throwaway81523 14 hours ago

        As nullc mentioned, the simplex method can take exponential time on some pathological examples (a space of measure 0, if that matters). The first P-time linear programming algo (and proof) was the ellipsoid method of Khachiyan in 1979. I have the impression it wasn't really practical. A later P-time interior-point method by Karmarkar is sometimes used today (it beats simplex on certain types of problems), from what I understand. But I'm nowhere near expert in this area.

      • nullc a day ago

        The simplex algorithm is exponential in the worst case, in spite of the fact that it performs very well 'on average'.

        • tgv 11 hours ago

          I had always blindly assumed it was polynomial. Thanks.