Comment by energy123
> ideal of a fully general problem solver
In practice that's basically the mindset, but full generality isn't technically possible because of the no free lunch theorem.
No: guaranteeing that a solution to a general computational puzzle, found in a finite amount of time, is within some percentage of optimality is impossible. You must be talking about a restricted class of problems that enjoy some kind of tractability guarantee.
By 1% of optimal, I was giving an example percentage to clarify that there exist solutions that are almost as good as optimal and that can be found in a reasonable amount of time.
There cannot be a guarantee of finding a solution within a given percentage of optimal for a fully general problem, since you would need to know the optimum to give such a guarantee (and since the problem is fully general, you cannot use the structure of the problem to reduce it).
Most constraint problems have many feasible solutions, and have a way to judge how much worse or better one solution is than another.
There are good and bad ways to write constraints.
One bad way to write constraints creates score traps, where a clearly better solution has the same score as a clearly worse solution.
For example, for shift scheduling, a solution with only 1 overlapping shift with the same employee is better than a solution with 2 overlapping shifts with the same employee.
A bad score function would penalize both solutions by 1, meaning the solver has no idea which of the two solutions is better.
A good score function would penalize the schedule with 1 overlapping shift with the same employee by 1, and the schedule with 2 overlapping shifts with the same employee by 2.
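To make the difference concrete, here is a minimal plain-Java sketch; the `Shift` record and the scoring helpers are illustrative inventions, not Timefold's actual API:

```java
import java.util.List;

public class ScoreTrapDemo {
    // Hypothetical shift: an employee and an hour range.
    record Shift(String employee, int startHour, int endHour) {}

    static boolean overlaps(Shift a, Shift b) {
        return a.employee().equals(b.employee())
                && a.startHour() < b.endHour() && b.startHour() < a.endHour();
    }

    // Bad: flat penalty of 1 if ANY overlap exists -> a score trap.
    static int badScore(List<Shift> schedule) {
        for (int i = 0; i < schedule.size(); i++)
            for (int j = i + 1; j < schedule.size(); j++)
                if (overlaps(schedule.get(i), schedule.get(j))) return -1;
        return 0;
    }

    // Good: penalize each overlapping pair, so fewer overlaps scores higher.
    static int goodScore(List<Shift> schedule) {
        int penalty = 0;
        for (int i = 0; i < schedule.size(); i++)
            for (int j = i + 1; j < schedule.size(); j++)
                if (overlaps(schedule.get(i), schedule.get(j))) penalty++;
        return -penalty;
    }

    public static void main(String[] args) {
        List<Shift> oneOverlap = List.of(
                new Shift("ann", 9, 17), new Shift("ann", 16, 20),
                new Shift("ann", 20, 23));
        List<Shift> twoOverlaps = List.of(
                new Shift("ann", 9, 17), new Shift("ann", 16, 20),
                new Shift("ann", 19, 23));
        // The bad score can't tell the two schedules apart...
        System.out.println(badScore(oneOverlap) + " " + badScore(twoOverlaps));   // -1 -1
        // ...while the good score prefers the schedule with fewer overlaps.
        System.out.println(goodScore(oneOverlap) + " " + goodScore(twoOverlaps)); // -1 -2
    }
}
```

With the good score, every removed overlap improves the score, so a local search always has a gradient to follow instead of a plateau.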
The class of problems I am talking about is the class of problems where you can assign a score to a possible solution, with limited score traps.
Timefold has no guarantees about finding a solution in reasonable time (but unless you've done something terribly wrong or have a truly massive dataset, it finds a good solution really quickly 99.99% of the time). Instead, you set the termination condition of the solver; it can be time-based (say, 60 minutes), based on unimproved time (solve until no better solution has been found for 60 minutes), or the first feasible solution (other termination conditions can be set as well).
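For instance, a termination section in the solver config might look like this (element names are from the XML config format Timefold inherited from OptaPlanner; verify against the current docs):

```xml
<!-- solverConfig.xml (sketch) -->
<solver xmlns="https://timefold.ai/xsd/solver">
  <termination>
    <!-- stop after 60 minutes of wall-clock time... -->
    <minutesSpentLimit>60</minutesSpentLimit>
    <!-- ...or once no better solution has been found for 60 minutes -->
    <unimprovedMinutesSpentLimit>60</unimprovedMinutesSpentLimit>
  </termination>
</solver>
```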
From what you say, I think you are not familiar with the content of the No Free Lunch theorem.
It says that, averaging over all possible objective functions, no optimization algorithm is better than the median one.
If you don't know anything about the objective, then the probability that Timefold is better than a randomly chosen optimization algorithm is 50%.
The key point is the averaging over all possible objective functions. You presumably don't expect to be even close to successful on such a general set.
I thought "No Free Lunch Theorem" was a joke from its name (although I should know better since "Hairy Ball Theorem" exists).
So if I understand correctly, it states that every optimization algorithm must choose an order in which to evaluate solutions, and the sooner it evaluates the "best" solution, the "better" the algorithm.
For example, say the search space is {"a", "b", "c"} and the best solution is "c". Then there are 3! (6) ways to evaluate all solutions:
"a", "b", "c"
"a", "c", "b"
"b", "a", "c"
"b", "c", "a"
"c", "a", "b"
"c", "b", "a"
(of course, in a real problem, the search space is much, MUCH larger).
The way Timefold tackles this is move selectors. In particular, you can design custom moves that use knowledge of your particular problem, which in turn affects when certain solutions are evaluated. Additionally, you can design custom phases that run your own algorithm and then pass the solution found to local search. One of the projects we are currently working on is the neighborhood API, to make it much easier to write your own custom moves. That being said, for most problems that humans deal with, you don't need to configure Timefold and can just work with the defaults (which are best-fit construction followed by a late acceptance local search with change and swap moves).
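As a rough sketch of how late acceptance works (this is a toy problem with made-up parameters, not Timefold's implementation): the algorithm keeps a short history of recent scores and accepts a move if it beats either the current score or the score from a fixed number of steps ago, which lets it escape local optima:

```java
import java.util.Arrays;
import java.util.Random;

public class LateAcceptanceDemo {
    // Toy problem: pick values in [0, 9] for each variable to match a hidden
    // target assignment; a "change move" tweaks one variable.
    static int cost(int[] solution, int[] target) {
        int c = 0;
        for (int i = 0; i < solution.length; i++) c += Math.abs(solution[i] - target[i]);
        return c;
    }

    static int[] solve(int[] target, int steps, int historyLength, long seed) {
        Random rng = new Random(seed);
        int[] current = new int[target.length]; // start at all zeros
        int currentCost = cost(current, target);
        int[] history = new int[historyLength];
        Arrays.fill(history, currentCost);

        for (int k = 0; k < steps; k++) {
            // Change move: pick one variable, try a new random value.
            int[] candidate = current.clone();
            candidate[rng.nextInt(candidate.length)] = rng.nextInt(10);
            int candidateCost = cost(candidate, target);

            // Late acceptance: accept if the candidate beats the current cost
            // OR the cost from historyLength steps ago.
            int slot = k % historyLength;
            if (candidateCost <= history[slot] || candidateCost <= currentCost) {
                current = candidate;
                currentCost = candidateCost;
            }
            history[slot] = currentCost;
        }
        return current;
    }

    public static void main(String[] args) {
        int[] target = {3, 1, 4, 1, 5};
        int[] best = solve(target, 5_000, 50, 42);
        System.out.println(Arrays.toString(best) + " cost=" + cost(best, target));
    }
}
```

A plain hill climber would reject every non-improving move; the history comparison is what gives late acceptance its tolerance for temporary regressions.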
For what it's worth: metaheuristics solvers tend to be better than linear solvers for shift scheduling and vehicle routing, but worse for bin packing. But it still works, and is still fast.
I was thinking because of Gödel's incompleteness theorem, but maybe there are multiple kinds of full generality. Wolpert and Macready seem to have been thinking about problems that are too open-ended to even be able to write a program to recognize a good solution.
That depends; do you want the optimal solution?
If so, I agree it is impossible for a fully general problem solver to find the optimal solution to a problem in a reasonable amount of time (unless P = NP, which is unlikely).
However, if a "good enough" solution that is only 1% worse than optimal works, then a fully general solver can do the job in a reasonable amount of time.
One example of such a fully general solver is Timefold; you express your constraints using plain old Java objects, so you can in theory do whatever you want in your constraint functions (you can even make network calls, but that is extremely ill-advised, since it will drastically slow down score calculation).
Disclosure: I work for Timefold.