klamike 3 days ago

Great to see optimization on the front page of HN! One thing I love about the book is that it's full of really nice figures. If, like me, you love visualizations, you may enjoy this website I've been working on to visualize linear programming (LP) solvers: https://lpviz.net.

It's by no means polished, but it can be pretty fun to play around with: you can visualize how the iterates of different LP algorithms (described in sections 11 and 12 of the book) react to changes in the feasible region/objective just by dragging the vertices/constraints around.

If you go to https://lpviz.net/?demo, it will draw a polytope for you and click around the interface to show off some of the features. I'm constantly chipping away at it in my free time, and I welcome any feedback and suggestions!

  • mmaaz 2 days ago

    this is really brilliant!!

    • klamike 2 days ago

      Thanks! I’m glad you enjoyed it. You may also get a kick out of the YouTube hyperlinks in the GitHub readme, especially the one on advanced methods for establishing convexity :)

cchianel 2 days ago

Some additional optimization resources (for metaheuristics, where you only have the objective/score function and no derivative):

- "Essentials of Metaheuristics" by Sean Luke https://cs.gmu.edu/~sean/book/metaheuristics/

- "Clever Algorithms" by Jason Brownlee https://cleveralgorithms.com/

Timefold uses the metaheuristic algorithms in these books (Tabu Search, Late Acceptance, Simulated Annealing, etc.) to find near-optimal solutions quickly from a score function (typically defined in a Java stream-like/SQL-like syntax so that scores can be calculated incrementally, which greatly improves score calculation speed).

You can see simplified diagrams of these algorithms in action in Timefold's docs: https://docs.timefold.ai/timefold-solver/latest/optimization....

Disclosure: I work for Timefold.

  • abhgh 2 days ago

    Timefold looks very interesting. This might be irrelevant but have you looked at stuff like InfoBax [1]?

    [1] https://willieneis.github.io/bax-website/

    • cchianel 2 days ago

      I haven't; from a quick reading, InfoBax is for when you have an expensive function and want to do limited evaluations. Timefold works with cheap functions and does many evaluations. It does this via Constraint Streams. A naive score function like:

          // Naive scoring: penalize every pair of distinct shifts assigned to
          // the same employee whose times overlap.
          var score = 0;
          for (var shiftA : solution.getShifts()) {
              for (var shiftB : solution.getShifts()) {
                  if (shiftA != shiftB && shiftA.getEmployee() == shiftB.getEmployee() && shiftA.overlaps(shiftB)) {
                      score -= 1;
                  }
              }
          }
          return score;
      
      usually takes on the order of shifts^2 evaluations of overlaps; with Constraint Streams, we only re-check the shifts affected by a change (turning it from O(N^2) into O(1) in most cases).
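
      For a flavor of what that looks like (a simplified sketch; the Shift accessors here are assumed, not taken from a real model):

          Constraint overlappingShifts(ConstraintFactory factory) {
              // Pair up distinct shifts of the same employee with overlapping times...
              return factory.forEachUniquePair(Shift.class,
                          Joiners.equal(Shift::getEmployee),
                          Joiners.overlapping(Shift::getStart, Shift::getEnd))
                      // ...and subtract 1 from the score per pair, like the loop above.
                      .penalize(SimpleScore.ONE)
                      .asConstraint("Overlapping shifts");
          }

      Since the solver knows the structure of the stream, a move that changes one shift only re-evaluates the pairs that shift participates in.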

      That being said, it might be useful for a move selector. I need to give it a more in-depth reading.

      • abhgh 2 days ago

        Thanks for the example. Yes, true, this is for expensive functions; to be precise, functions that depend on data that is hard to gather, so you interleave computing the value of the function with strategically gathering just as much data as is needed to compute it. The video on their page [1] is quite illustrative: it calculates the shortest path on a graph where the edge weights are expensive to obtain. Note how the edge weights they end up obtaining form a narrow band around the shortest path they find.

        [1] https://willieneis.github.io/bax-website/

kragen 3 days ago

This is a 521-page CC-licensed book on optimization which looks absolutely fantastic. It starts out with modern gradient-based algorithms rooted in automatic differentiation, including recent things like Adam, rather than the historically more important linear optimization algorithms like the simplex method (the 24-page chapter 12 covers linear optimization). There are a number of chapters on things I haven't even heard of, and, best of all, there are exercises.

I've been wanting something like this for a long time, and I regret not knowing about the first edition.

If you are wondering why this is a more interesting problem than, say, sorting a list, the answer is that optimization algorithms are attempts at the ideal of a fully general problem solver. Instead of writing a program to solve the problem, you write a program to recognize what a solution would look like, which is often much easier, for example with a labeled dataset. Then you apply the optimization algorithm to your program. And that is how current AI is being done, with automatic differentiation and variants of Adam, but there are many other optimization algorithms which may be better alternatives in some circumstances.
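
Here is a toy version of that inversion (my own sketch; a finite-difference gradient stands in for real automatic differentiation): we never write code that computes sqrt(2), only a loss that recognizes it, and let gradient descent do the searching:

    import java.util.function.DoubleUnaryOperator;

    public class RecognizeDontSolve {
        public static void main(String[] args) {
            // "Recognizer": zero exactly when x*x = 2, positive otherwise.
            DoubleUnaryOperator loss = x -> Math.pow(x * x - 2.0, 2);
            double x = 1.0;           // initial guess
            double lr = 0.05, h = 1e-6;
            for (int i = 0; i < 500; i++) {
                // Finite-difference gradient (real systems use autodiff instead).
                double g = (loss.applyAsDouble(x + h) - loss.applyAsDouble(x - h)) / (2 * h);
                x -= lr * g;          // gradient descent step
            }
            System.out.println(x);    // ~1.41421356, i.e. sqrt(2)
        }
    }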

  • energy123 2 days ago

    > ideal of a fully general problem solver

    In practice that's basically the mindset, but full generality isn't technically possible because of the no free lunch theorem.

    • cchianel 2 days ago

      That depends; do you want the optimal solution?

      If so, I agree it is impossible for a fully general problem solver to find the optimal solution to a problem in a reasonable amount of time (unless P = NP, which is unlikely).

      However, if a "good enough" solution that is only 1% worse than optimal works, then a fully general solver can do the job in a reasonable amount of time.

      One example of such a fully general solver is Timefold: you express your constraints using plain old Java objects, so you can in theory do whatever you want in your constraint functions (you can even make network calls, but that is extremely ill-advised, since it will drastically slow down score calculation).

      Disclosure: I work for Timefold.

      • kragen 2 days ago

        No: it is impossible to guarantee that a solution to a general computational puzzle, found in a finite amount of time, is within some percentage of optimality. You must be talking about a restricted class of problems that enjoys some kind of tractability guarantee.

    • kragen 2 days ago

      I was thinking because of Gödel's incompleteness theorem, but maybe there are multiple kinds of full generality. Wolpert and Macready seem to have been thinking about problems that are too open-ended to even be able to write a program to recognize a good solution.

crystal_revenge 3 days ago

This book, as well as Kochenderfer's earlier book "Decision Making Under Uncertainty" [0], is among my favorite technical books (and I wouldn't be surprised to find that his newest book, "Algorithms for Decision Making", also falls into this category).

The algorithm descriptions are clear, the visualizations are great, and, as someone who does a lot of ML work, I appreciate that they cover a lot of important topics beyond what's in your standard ML book. This is especially refreshing if you're looking for thinking about optimization that is not just gradient descent (which has been basically the only mainstream approach to optimization in ML for two decades now).

I've known a few people to complain that the code examples are in Julia, but, as someone who doesn't know Julia, if you have experience doing quantitative programming at all it should be trivial to convert the Julia examples to your favorite language for implementation (and frankly, I'm a bit horrified that so many "smart" people interested in these sorts of topics seem trapped into reasoning in one specific language).

Optimization is such a rich field and should be of interest to any computer scientist who would describe themselves as "interested in solving hard problems" rather than just applying a well known technique to a specific class of hard problems.

0. https://web.stanford.edu/group/sisl/public/dmu.pdf

  • eru 3 days ago

    > Optimization is such a rich field and should be of interest to any computer scientist who would describe themselves as "interested in solving hard problems" rather than just applying a well known technique to a specific class of hard problems.

    Yes, but even if you are only interested in pragmatically solving problems, off-the-shelf solvers for various optimisation problems are a great toolkit to bring to bear.

    Reformulating your specific problem as, e.g., a mixed integer linear programming problem can often give you a quick performance baseline. The same goes for SMT. It also teaches you not to be afraid of NP. And it teaches you a valuable lesson in the separation of specification (= your formulation of the problem) from how the solution is computed (= whatever the solver does), which can be applicable in other domains too.
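
    For a taste, here is a sketch of mine using Google OR-Tools' Java wrapper (assuming the or-tools dependency and its native libraries are installed). Note how the code only states the problem; the "how" is entirely the solver's business:

        import com.google.ortools.Loader;
        import com.google.ortools.linearsolver.MPConstraint;
        import com.google.ortools.linearsolver.MPObjective;
        import com.google.ortools.linearsolver.MPSolver;
        import com.google.ortools.linearsolver.MPVariable;

        public class MilpBaseline {
            public static void main(String[] args) {
                Loader.loadNativeLibraries();
                MPSolver solver = MPSolver.createSolver("SCIP"); // MILP-capable backend

                // Specification: maximize 3x + 2y subject to x + y <= 4,
                // with x and y integers in [0, 4].
                MPVariable x = solver.makeIntVar(0, 4, "x");
                MPVariable y = solver.makeIntVar(0, 4, "y");
                MPConstraint cap = solver.makeConstraint(Double.NEGATIVE_INFINITY, 4, "capacity");
                cap.setCoefficient(x, 1);
                cap.setCoefficient(y, 1);
                MPObjective obj = solver.objective();
                obj.setCoefficient(x, 3);
                obj.setCoefficient(y, 2);
                obj.setMaximization();

                solver.solve();
                System.out.println("x=" + x.solutionValue() + ", y=" + y.solutionValue());
            }
        }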

  • Brystephor 2 days ago

    What other resources would you recommend for learning about optimization?

    I use multi-armed bandits a lot at work, and I wonder what other algorithms I should look into. It's hard to read the name of an algorithm or category and get a sense of whether or not it's applicable. It seems like the general process of "which path out of N options is the best?" can be reframed in many ways depending on contextual details, such as how large N is, what the feedback latency is, and what the constraints around evaluation and exploration are.
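
    (For concreteness, the baseline I have in mind is plain epsilon-greedy; a rough illustrative sketch, all names mine:)

        import java.util.Random;

        class EpsilonGreedy {
            private final double[] meanReward; // running mean reward per option
            private final int[] pulls;
            private final double epsilon;      // exploration rate
            private final Random rng = new Random();

            EpsilonGreedy(int options, double epsilon) {
                this.meanReward = new double[options];
                this.pulls = new int[options];
                this.epsilon = epsilon;
            }

            int choose() {
                if (rng.nextDouble() < epsilon) {
                    return rng.nextInt(meanReward.length); // explore a random option
                }
                int best = 0; // otherwise exploit the current best estimate
                for (int i = 1; i < meanReward.length; i++) {
                    if (meanReward[i] > meanReward[best]) best = i;
                }
                return best;
            }

            void record(int option, double reward) {
                pulls[option]++;
                meanReward[option] += (reward - meanReward[option]) / pulls[option]; // incremental mean
            }
        }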

  • sghiassy 3 days ago

    Use an LLM to convert the Julia sample code to a language of your choice

marmalade2413 2 days ago

An absolutely fantastic book. I've read it cover to cover a couple of times during my PhD (which focused on neural networks and numerical solvers). It gives the right amount of depth to serve as a detailed introduction whilst covering a lot of the key areas in optimisation. I still use this book as a reference years later.

rhines 2 days ago

Thanks for sharing this; it looks like a useful overview of the field. I wish the first edition had come out a year or two earlier, as it would have been a great resource for my undergrad research work. Back then there was no book covering CMA-ES, surrogate models, and Gaussian processes all in one place; everything was scattered across different books and papers, with varying levels of technical depth and differing notations.

fertom00 2 days ago

I'm astonished and also disappointed to see that the book has dedicated sections for the metaheuristics Firefly and Cuckoo Search. I don't know what happened there, but any experienced researcher in the field knows that these metaheuristics (among several others) are not serious and are heavily criticized in the community.

There is even a paper in ITOR about this: https://onlinelibrary.wiley.com/doi/abs/10.1111/itor.12001

Unfortunately, there are some "research bubbles" in the community of people who only work with these shady metaheuristics and keep citing each other. It has gotten so bad that I still frequently see people at conferences using such metaheuristics and believing that they are fine, only to get criticized by someone in the audience later.

sfpotter 3 days ago

Can anyone provide a comparison of this book to Nocedal and Wright's book?

  • owlbite 3 days ago

    This book provides a high-level overview of many methods without (on a quick skim) really hinting at their practical usage. Basically, this reads to me as an encyclopedia, whereas Nocedal and Wright is more of an introductory graduate course, going into significantly more detail on a smaller selection of algorithms (generally those that are more commonly used).

    To pick on what I'd consider one of the major workhorse methods of continuous constrained optimization: interior point methods get a 2-3 page, super-high-level summary in this book, while Nocedal and Wright give the topic an entire chapter (~25 pages), which of course is still probably insufficient detail to implement anything like a competitive solver.
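
    (For anyone curious what those pages compress: the core log-barrier idea fits in a toy sketch like this one of mine, minimizing f(x) = x subject to x >= 1 via Newton steps on t*x - log(x - 1) for growing t. It is nothing remotely like a competitive solver.)

        public class BarrierSketch {
            public static void main(String[] args) {
                double x = 2.0; // strictly feasible starting point
                for (double t = 1; t <= 1e6; t *= 10) {
                    // Newton's method on the barrier objective t*x - log(x - 1).
                    for (int i = 0; i < 100; i++) {
                        double grad = t - 1.0 / (x - 1.0);
                        double hess = 1.0 / ((x - 1.0) * (x - 1.0));
                        x -= grad / hess;
                        // A real IPM damps the step; here we just clamp back inside.
                        if (x <= 1.0) x = 1.0 + 1e-12;
                    }
                    // Each minimizer x = 1 + 1/t is a point on the central path toward x = 1.
                    System.out.printf("t=%.0e  x=%.8f%n", t, x);
                }
            }
        }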

    • ted_dunning 20 hours ago

      It's a bit like the old Numerical Recipes book in that regard.

      (but better)

  • Mageek 2 days ago

    Breadth vs. Depth.

    Alg4Opt covers more topics, providing the motivation behind each algorithm, sometimes a basic derivation, and a concrete implementation. It has citations in the margins for more info.

    Nocedal and Wright will go more in-depth on derivation, proving theorems, etc. Implementations are pseudocode, and fewer topics are covered.

  • imtringued 2 days ago

    Next time mention the name of the book first.

    Numerical Optimization (Jorge Nocedal, Stephen J. Wright).

jonbaer 2 days ago

Thanks for the post/link. Excellent chapter on multiobjective optimization; any other books/material with that focus?