Could someone maybe give a high-level explanation of why commercial ILP solvers (e.g. Gurobi) are that much better than free/open-source ones? Is it because ILP is inherently that difficult to solve (I know it's NP-hard), and the best solvers are just a large ensemble of heuristics for very specific sub-problems, so that no general "good" strategy has made its way into the public domain?
christina97 · 9h ago
It’s mostly that they work closely with clients in a very hands-on way to implement problem-specific speedups. And they’ve been doing this for 10-20 years. In the MILP world this means good heuristics (to find good starting points for branch & bound, and to effectively prune the B&B tree), as well as custom cuts (to cut off fractional solutions in a way that effectively improves the objective and solution integrality).
When researchers in Operations Research pick a problem, they can often beat Gurobi and other solvers pretty easily by writing their own cuts and heuristics. The solver companies just do this consistently (by hiring teams of PhDs and researchers) and have a battery of client problems to track improvements and watch for regressions.
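To make "heuristics plus cuts around branch & bound" concrete, here is a deliberately naive LP-based branch-and-bound sketch in Python, assuming SciPy's linprog for the relaxations and made-up data. Commercial solvers differ in the presolve, the cut families, and the branching and primal heuristics layered on this skeleton, not in the skeleton itself.

    # Naive LP-based branch & bound for  max c.x  s.t.  A x <= b, x >= 0, x integer.
    # Real solvers add presolve, cutting planes and strong branching heuristics here.
    import math
    import numpy as np
    from scipy.optimize import linprog

    def branch_and_bound(c, A, b, bounds, best=(-math.inf, None)):
        # Solve the LP relaxation (linprog minimizes, so negate c).
        res = linprog(-np.asarray(c), A_ub=A, b_ub=b, bounds=bounds, method="highs")
        if not res.success:
            return best                      # infeasible node: prune
        if -res.fun <= best[0]:
            return best                      # LP bound no better than incumbent: prune
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:
            return (-res.fun, res.x)         # integral solution: new incumbent
        i = frac[0]                          # branching rule: first fractional variable
        lo, hi = bounds[i]
        down = bounds.copy(); down[i] = (lo, math.floor(res.x[i]))
        up = bounds.copy();   up[i] = (math.ceil(res.x[i]), hi)
        best = branch_and_bound(c, A, b, down, best)
        return branch_and_bound(c, A, b, up, best)

    # Tiny example: max 5x + 4y  s.t.  6x + 4y <= 24,  x + 2y <= 6, integers
    print(branch_and_bound([5, 4], [[6, 4], [1, 2]], [24, 6], [(0, 10), (0, 10)]))
    # -> optimal value 20 at x = (4, 0)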
cpgxiii · 10h ago
> the best solvers are just a large ensemble of heuristics for very specific sub-problems
The big commercial vendors have had the resources (and the clients interested in helping) to invest a lot of time in tuning everything in their solvers to real-world problems. Heuristics are part of that; recognizing simpler sub-problems or approximations that can be fed back into the full problem is another part.
I think a big part is that the OSS solvers are somewhat hamstrung by the combination of several issues: (1) the barrier to entry in SoTA optimizer development is very high, meaning that there are very few researchers/developers capable of usefully contributing both the mathematics and the programming needed in the first place, (2) if you are capable of (1), the career paths that make lots of money lead you away from OSS contribution, and (3) the nature of OSS projects means that "customers" are unlikely to contribute back the kind of examples, performance data, and/or profiling that is really needed to improve the solvers.
There are some exceptions to (2), although being outside of traditional commercial solver development doesn't guarantee being OSS (e.g. SNOPT, developed at Stanford, is still commercially licensed). A lot of academic solver work happens in the context of particular applications (e.g. Clarabel) and so tends to be more narrowly focused on particular problem classes. A lot of other fields have gotten past this bottleneck by having a large tech company acquire an existing commercial project (e.g. Mujoco) or fund an OSS project as a means of undercutting competitors. There are narrow examples of this for solvers (e.g. Ceres) but I suspect the investment to develop an entire general-purpose solver stack from scratch has been considered prohibitive.
zozbot234 · 10h ago
> solvers are just a large ensemble of heuristics for very specific sub-problems
Isn't that statement trivially applicable to anything NP-Hard (which ILP is, since it's equivalent to SAT)?
fooker · 9h ago
No, good algorithms for NP hard problems can be more than just heuristics.
Modern SAT solvers are a good example of this. CDCL is elegant.
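For a sense of what "more than just heuristics" means: CDCL extends the classic DPLL skeleton with conflict-driven clause learning and non-chronological backjumping. The toy Python sketch below shows only the DPLL part (unit propagation plus branching); the CDCL additions are omitted.

    # Toy DPLL SAT solver: clauses are lists of non-zero ints, a negative int is a negated variable.
    # CDCL adds conflict analysis, clause learning and backjumping on top of this skeleton.
    def dpll(clauses, assignment=None):
        assignment = dict(assignment or {})
        changed = True
        while changed:                               # unit propagation
            changed = False
            for clause in clauses:
                unassigned, satisfied = [], False
                for lit in clause:
                    val = assignment.get(abs(lit))
                    if val is None:
                        unassigned.append(lit)
                    elif (lit > 0) == val:
                        satisfied = True
                        break
                if satisfied:
                    continue
                if not unassigned:
                    return None                      # clause falsified: conflict
                if len(unassigned) == 1:
                    lit = unassigned[0]
                    assignment[abs(lit)] = lit > 0   # forced assignment
                    changed = True
        variables = {abs(l) for c in clauses for l in c}
        free = [v for v in variables if v not in assignment]
        if not free:
            return assignment                        # all variables assigned: satisfiable
        v = free[0]                                  # branching: try both values of one variable
        for guess in (True, False):
            result = dpll(clauses, {**assignment, v: guess})
            if result is not None:
                return result
        return None

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(dpll([[1, 2], [-1, 3], [-2, -3]]))
    # -> a satisfying assignment, e.g. {1: True, 3: True, 2: False}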
graycat · 5h ago
NP-hard is really hard, but only in the sense of (a) polynomial running time, (b) exact solutions, and (c) worst-case problems.
One might suspect that being fast enough on specific problems, with approximate solutions that still make/save a lot of money, would also be welcome. Ah, perhaps not!
E.g., in NYC, two guys had a marketing resource allocation problem, tried simulated annealing, and ran for days before giving up.
They sent me the problem statement via email, and in one week I had the software written. In the next week, using the IBM OSL (Optimization Subroutine Library) and some Lagrangian relaxation, 500 primal-dual iterations on a problem with 600,000 variables and 40,000 constraints found a feasible solution within 0.025% of optimality.
So I'd solved their problem (for practical purposes, getting within 0.025% of optimal has to count as solving it) for free.
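For readers unfamiliar with the technique mentioned here, a toy sketch of the Lagrangian-relaxation-plus-subgradient pattern (made-up data, not the poster's actual model): relax the complicating constraint into the objective with a multiplier, solve the trivial remaining subproblem each iteration, and keep lower and upper bounds that bracket the optimum.

    # min c.x over x in {0,1}^n  s.t.  a.x >= d  (the "complicating" constraint).
    import numpy as np

    c = np.array([4.0, 3.0, 6.0, 5.0])   # item costs (made-up data)
    a = np.array([2.0, 1.0, 3.0, 2.0])   # item "coverage"
    d = 5.0                               # required coverage

    lam, best_lower, best_upper = 0.0, -np.inf, np.inf
    for k in range(1, 201):
        # Relaxed subproblem: pick item i iff its reduced cost c_i - lam*a_i is negative.
        x = (c - lam * a < 0).astype(float)
        lower = lam * d + float((c - lam * a) @ x)      # Lagrangian dual value: a lower bound
        best_lower = max(best_lower, lower)
        # Cheap primal heuristic: greedily add items by cost/coverage until feasible.
        xh = x.copy()
        for i in np.argsort(c / a):
            if a @ xh >= d:
                break
            xh[i] = 1.0
        if a @ xh >= d:
            best_upper = min(best_upper, float(c @ xh))  # feasible cost: an upper bound
        # Subgradient step on the violation of the relaxed constraint.
        lam = max(0.0, lam + (1.0 / k) * (d - a @ x))

    print(best_lower, best_upper)   # the two bounds bracket (here, pin down) the optimum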
They were so embarrassed they wanted nothing to do with me. We never got to where I set a price for my work.
The problem those two guys had was likely that, if they worked with me, I would come to understand their customers and could then beat them and take those customers. That happened a second time there in NYC.
If a guy is in, say, the auto business, and needs a lawyer, the guy might want the best lawyer but will not fear that the lawyer will enter the auto business as a powerful competitor. Similarly for a good medical doctor.
For an optimization guy saving, say, 5% of the operating costs of a big business, say one with a billion dollars in revenue a year, the whole management suite will be afraid of him getting too much power and will work to get him out -- Goal Subordination 101, or just fighting to retain position in the tribe.
After having some grand successes in applied math where other people had the problem but then became afraid that I would be too powerful, I formulated a rule:
If some technical, computing, math, etc. idea you have is so valuable, then start your own business exploiting that idea -- though of course you need a suitable business for the idea to be powerful in.
lukebuehler · 10h ago
Scale and speed. For example, most quant trading firms run huge optimizations as often as possible. Open-source solvers often cannot even solve the problems (OOM exceptions, etc.).
FilosofumRex · 7h ago
In most MILP domains, the underlying engineering know-how is more critical than the mathematical formulations or the CS coding (that's why most OR groups operate independently of math or CS departments).
OSS never took off among professional engineers because they have "skin in the game", unlike math and CS folks who just reboot and pretend nothing is wrong.
pradn · 1h ago
"... between 1988 and 2004, hardware got 1600 times faster, and LP solvers got 3300 times faster, allowing for a cumulative speed-up factor higher than 5 × 106, and that was already 20 years ago!"
"The authors observed a speedup of 1000 between [the commercial MILP solvers of] 2001 and 2020 (50 due to algorithms, 20 due to faster computers)."
I wonder if we can collect these speedup factors across computing subfields, decomposed into the contributions of algorithmic improvements and faster computers.
In compilers, there's "Proebsting's Law": compiler advances double computing power every 18 years.
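For clarity, the cumulative factors in the quotes above are products, not sums, of the per-source factors:

    # The quoted totals are multiplicative: hardware speedup x algorithmic speedup.
    hw_1988_2004, algo_1988_2004 = 1600, 3300
    print(hw_1988_2004 * algo_1988_2004)      # 5_280_000, i.e. "higher than 5 x 10^6"
    algo_2001_2020, hw_2001_2020 = 50, 20
    print(algo_2001_2020 * hw_2001_2020)      # 1000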
dcminter · 11h ago
I vaguely recall building a resource allocation tool using IBM's "ILOG" mixed integer linear programming library, and we realised that if we'd built the tool about 20 years earlier, it would still have been running on the same problems we were now solving within 5 minutes.
As I recall it the raw computer power had increased by a factor of around a thousand and the algorithms had improved by about the same, giving us a factor of a million improvement.
Worth pondering when trying to predict the future!
The "resources" in question were diamonds by the way...
tormeh · 8h ago
Can anyone share how this is used in practice? Somehow I imagine implementing numerical optimization often fails due to the usual problems with data-driven approaches (trust, bad data, etc.), and ultimately someone important just decides how things are going to be done based on gut feeling.
jakewins · 8h ago
We use solvers throughout the stack at work: solvers to schedule home batteries and EVs in people's homes optimally, solvers to schedule hundreds of thousands of those homes optimally as portfolios, and solvers to trade that portfolio optimally.
The EU electricity spot price is set each day in a single giant solver run; look up Euphemia for some write-ups of how that works.
Most any field where there is a clear goal to optimise and real money on the line will be riddled with solvers.
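As a flavour of the battery-scheduling case, here is a toy day-ahead LP with made-up prices and loads, assuming PuLP with its bundled CBC solver; real systems of the kind described above add efficiency curves, tariffs, grid constraints and forecast uncertainty.

    # Choose hourly charge/discharge to minimise the cost of the home's net consumption
    # at known hourly prices, subject to battery power and energy limits.
    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

    prices = [0.30, 0.25, 0.10, 0.08, 0.12, 0.35, 0.40, 0.32]   # EUR/kWh (made up)
    load   = [0.5, 0.4, 0.3, 0.3, 0.6, 1.2, 1.5, 1.0]           # household load, kWh
    T, cap, p_max, eta = len(prices), 5.0, 2.0, 0.95             # size, power limit, efficiency

    prob = LpProblem("battery_schedule", LpMinimize)
    ch  = LpVariable.dicts("charge",    range(T), lowBound=0, upBound=p_max)
    dis = LpVariable.dicts("discharge", range(T), lowBound=0, upBound=p_max)
    soc = LpVariable.dicts("soc",       range(T), lowBound=0, upBound=cap)

    # Cost of net grid consumption each hour (negative net = export, paid at the same price).
    prob += lpSum(prices[t] * (load[t] + ch[t] - dis[t]) for t in range(T))

    for t in range(T):
        prev = soc[t - 1] if t > 0 else 0.0            # start the day empty
        prob += soc[t] == prev + eta * ch[t] - dis[t]  # state-of-charge dynamics

    prob.solve()
    print(value(prob.objective),
          [(round(value(ch[t]), 2), round(value(dis[t]), 2)) for t in range(T)])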
ies7 · 6h ago
FMCG company here. We use these in practice for:
1. Salesman & delivery travel plan
2. Machine, Human and material resource scheduling for production
3. Inventory levels for warehouse distribution centers. This one isn't fully automatic because demand forecasting is hard.
I remember implementing some version of Gomory cutting planes back in the 1990s in Maple (for learning, not for production). Looks like the field has progressed...
> if we needed two months of running time to solve an LP in the early 1990s, we would need less than one second today. Recently, Bixby compared the machine-independent performance of two MILP solvers, CPLEX and Gurobi, between 1990 and 2020 and reported speed-ups of almost 4×10^6.
djoldman · 15h ago
I've heard Gurobi is fairly expensive. Anyone willing to share pricing details?
Chio · 14h ago
I can't share pricing details since they are confidential, but if you just want to play with MIP you don't need to buy one of the big three (XPRESS, Gurobi, CPLEX), which are all very expensive but usually available for free for students. There are at least two good open source / free for non-commercial use MIP solvers available: HiGHS (https://highs.dev/) and SCIP (https://www.scipopt.org/).
I've used both. They are waaaaaaaaaay faster, waaaay more reliable, and actually have support. You're not going to want to run your product that is responsible for millions off of something without really solid support.
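For a quick taste of the free route: a reasonably recent SciPy bundles HiGHS, so a toy MIP runs without installing a separate solver at all (a sketch with toy numbers, not a benchmark):

    # maximise 5x + 4y  s.t.  6x + 4y <= 24,  x + 2y <= 6,  x, y >= 0 and integer.
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    c = np.array([-5.0, -4.0])                       # milp minimises, so negate the objective
    cons = LinearConstraint([[6, 4], [1, 2]], ub=[24, 6])
    res = milp(c, constraints=cons, integrality=np.ones(2), bounds=Bounds(0, np.inf))
    print(res.x, -res.fun)                           # -> [4. 0.] 20.0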
wombatpm · 9h ago
You can get a temporary free license for Gurobi. You are limited to a 1000-node problem size, but you can learn how to use the tool and set up your problem.
If you have a problem that needs Gurobi, it's worth paying for it. Talk with their sales team; they are happy to help you get started. They know that once you learn how to use it, and see how it can solve your problems, you will be inclined to use it in the future.
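For what "setting up your problem" looks like in gurobipy (the same toy model as the SciPy sketch above; this assumes you have a Gurobi license installed, e.g. the limited free one):

    import gurobipy as gp
    from gurobipy import GRB

    m = gp.Model("toy")
    x = m.addVar(vtype=GRB.INTEGER, lb=0, name="x")
    y = m.addVar(vtype=GRB.INTEGER, lb=0, name="y")
    m.setObjective(5 * x + 4 * y, GRB.MAXIMIZE)
    m.addConstr(6 * x + 4 * y <= 24)
    m.addConstr(x + 2 * y <= 6)
    m.optimize()
    print(x.X, y.X, m.ObjVal)   # -> 4.0 0.0 20.0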
antman · 13h ago
OR-Tools by Google had good benchmarks.
RainyDayTmrw · 14h ago
What I've heard - and obviously I can't confirm this - is that their only pricing tier is "call us" - at which point they try to figure out how much money you're making and ask for a slice of it.
edot · 13h ago
It’s much cheaper than making suboptimal decisions slowly. Free solvers are fine for small problems (GLPK, for example), but lots of business problems are pretty much impossible to solve in the timeframe required unless you fork over cash for a premium solver (Gurobi being the best).
7thaccount · 5h ago
The best MIP solvers (CPLEX, GUROBI, FICO) are all extremely expensive unless you're an academic. The free ones are fine for smaller problems. Some like Mosek are quite affordable and a good middle ground. To most organizations, the cost is reasonable for what they're getting.
jwr · 3h ago
I wish somebody in this thread would quantify "extremely expensive". So many messages and we still have no idea how much they cost.
quanto · 12h ago
The last time I checked about a decade ago, a full license with multiple users and on a server was around 100k USD. I don't recall exact number of seats or server count restrictions.
I want to add that, for many in the industry, it is well worth the price.
cschmidt · 12h ago
Gurobi does have a cloud service where you pay by the hour. A full non-academic license is pricy.
almostgotcaught · 13h ago
I don't know why people think it's such a deeply shrouded secret - it's ~10k a seat for a core-limited license.
0cf8612b2e1e · 8h ago
Heh, given all of the whispering, I was imagining something 10x the price. I am a nobody and have at least one license to a different product that is some $13k annual.
__alexs · 11h ago
It's not cheap but actually quite reasonable and the quality is very good vs free solvers. If you are building a product that needs MILP it's worth it.
beret4breakfast · 11h ago
It feels like there’s a significant lack of “ML/AI”-based approaches applied to these kinds of problems. I’ve seen a lot of examples of RL/GNN papers that do attempt to solve smaller problems, but it always feels like the best option is to just pay for a Gurobi license and have at it. I’ve been doing some scheduling optimisation recently (close to job shop scheduling), and while there are some examples of using RL, they just don’t seem to cut it. I’ve resorted to evolutionary algorithms to get reasonable solutions to some big problems. Maybe it’s just always more efficient to use OR-type approaches when you can formulate the problem well.
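For reference, "evolutionary algorithm" here can be as simple as a permutation GA. A bare-bones sketch on a made-up single-machine total-tardiness instance (not the poster's actual problem): truncation selection, an order-preserving crossover, swap mutation, no optimality guarantee.

    import random

    proc = [4, 2, 6, 3, 5, 7, 2, 4]            # processing times (made up)
    due  = [8, 5, 20, 9, 16, 30, 6, 12]        # due dates

    def tardiness(perm):
        t, total = 0, 0
        for j in perm:
            t += proc[j]
            total += max(0, t - due[j])
        return total

    def crossover(a, b):                        # keep a slice of a, fill the rest in b's order
        i, j = sorted(random.sample(range(len(a)), 2))
        middle = a[i:j]
        rest = [g for g in b if g not in middle]
        return rest[:i] + middle + rest[i:]

    def mutate(perm, rate=0.2):                 # occasional swap of two jobs
        perm = perm[:]
        if random.random() < rate:
            i, j = random.sample(range(len(perm)), 2)
            perm[i], perm[j] = perm[j], perm[i]
        return perm

    pop = [random.sample(range(len(proc)), len(proc)) for _ in range(50)]
    for _ in range(200):
        pop.sort(key=tardiness)
        parents = pop[:25]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(25)]
        pop = parents + children

    best = min(pop, key=tardiness)
    print(tardiness(best), best)                # best schedule found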
7thaccount · 5h ago
It depends on the problem. The security-constrained unit commitment problem (how you figure out which power plants to turn on and when) is an unbelievably complex problem for which MILP solvers like Gurobi can find globally optimal solutions (within the bounds of the MIP gap) quickly. Sure, you could create a genetic algorithm, but there is no guarantee it will give you an answer that isn't stuck in a local minimum, and that's assuming you can make it run fast. Neural networks are also going to be sub-optimal.
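A stripped-down illustration of the unit-commitment structure, assuming PuLP, with toy data and without the network security constraints, ramping, or minimum up/down times that make real SCUC hard: a binary on/off variable per plant per hour links commitment to generation limits.

    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

    demand = [60, 100, 140, 90]                                  # MW per hour (made up)
    pmin   = {"coal": 40, "gas": 10, "peaker": 0}                # MW when committed
    pmax   = {"coal": 100, "gas": 60, "peaker": 40}
    cvar   = {"coal": 20, "gas": 35, "peaker": 80}               # $/MWh
    cfix   = {"coal": 500, "gas": 150, "peaker": 50}             # $/h no-load cost when on

    G, T = list(pmin), range(len(demand))
    prob = LpProblem("unit_commitment", LpMinimize)
    u = LpVariable.dicts("on",  [(g, t) for g in G for t in T], cat=LpBinary)
    p = LpVariable.dicts("gen", [(g, t) for g in G for t in T], lowBound=0)

    prob += lpSum(cvar[g] * p[g, t] + cfix[g] * u[g, t] for g in G for t in T)
    for t in T:
        prob += lpSum(p[g, t] for g in G) == demand[t]           # meet demand each hour
        for g in G:
            prob += p[g, t] <= pmax[g] * u[g, t]                 # generate only when committed
            prob += p[g, t] >= pmin[g] * u[g, t]                 # and above minimum stable level

    prob.solve()
    print(value(prob.objective), {(g, t): value(u[g, t]) for g in G for t in T})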
zozbot234 · 10h ago
SAT is a standard GOFAI problem and you can of course use any programming language in the ML family to write a SAT solver. Thus I'd say that "ML/AI" approaches are, if anything, quite applicable!
perching_aix · 15h ago
title could use [pdf] [2024]
gcr · 14h ago
the link does not point to a pdf, it points to an abstract
perching_aix · 14h ago
Unless the OP meant to post specifically the abstract, which I very much doubt, the content submitted is the PDF linked. That said, if that's how the [pdf] tag is meant to be used on this forum, I could understand. Would just also leave me moderately annoyed & wondering why the tag isn't automated then, since that'd be automatable.
gammarator · 12h ago
[pdf] implies that the link directly downloads a pdf.
perching_aix · 11h ago
Then I am hereby indeed officially moderately annoyed & wondering.
layer8 · 9h ago
People can read the abstract and then decide if they want to go deeper and also download and read the PDF. I’m sure that many only read the abstract.
Furthermore, depending on publishing site, a paper may also be available as HTML rendered from the LaTeX source, in addition to PDF. (If the page does not now, it may in the future.)
The purpose of a [PDF] tag is to warn about possible unsuitability of the linked resource for mobile consumption (which isn’t the case for the article page linked here), possible download size (though maybe not anymore, nowadays), and possible brightness shock when using dark mode.
Integer linear programming doesn't sound very complex.
ColinWright · 8h ago
Graph vertex 3-colouring (G3C) is in NP and is NP-Hard, therefore NP-Complete (NPC).
There is a result that says that if you can solve general ILP problems then you can solve general G3C.
Satisfiability is in NP and is NP-Hard, therefore NP-Complete (NPC). It is therefore equivalent (under some definition of "equivalent") to G3C.
There is a known result that if you can solve arbitrary G3C problems then you can factor integers. While the problem of factoring integers (FAC) is not known to be NPC, clearly factoring integers is very important in today's computing environment.
So if you can solve arbitrary ILP problems you can break several current encryption algorithms that are thought to be secure.
So we can deduce that ILP is a fairly tricky problem to solve.
The thing that fools a lot of people is that random instances of NPC problems tend to be easy. The core of difficult instances gets smaller (in relative terms) as the problems get bigger, so if, say, you pick a random graph, it's probably trivial to find a vertex 3-colouring, or show that none such exists.
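To make the ILP-solves-G3C direction concrete, here is a sketch (assuming PuLP with its default CBC solver) that encodes 3-colourability of a small graph as a 0/1 feasibility program; a hard graph instance becomes a hard ILP instance.

    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, LpStatus, value

    edges = [(0, 1), (1, 2), (2, 0), (2, 3)]           # a triangle plus a pendant vertex
    nodes = sorted({v for e in edges for v in e})
    colours = range(3)

    prob = LpProblem("graph_3_colouring", LpMinimize)
    x = LpVariable.dicts("x", [(v, c) for v in nodes for c in colours], cat=LpBinary)
    prob += lpSum([])                                   # constant objective: pure feasibility
    for v in nodes:
        prob += lpSum(x[v, c] for c in colours) == 1    # every vertex gets exactly one colour
    for (u, v) in edges:
        for c in colours:
            prob += x[u, c] + x[v, c] <= 1              # adjacent vertices get different colours

    prob.solve()
    print(LpStatus[prob.status],
          {v: next(c for c in colours if value(x[v, c]) > 0.5) for v in nodes})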
tgv · 11h ago
You have to find the integers that best fulfill a certain condition. That's fundamentally different from optimizing over real numbers. It looks exactly like other numerical problems, but there's no general efficient solution for it, only (very good) heuristics for specific classes.
throwaway81523 · 6h ago
Even continuous linear programming wasn't known to be solvable in polynomial time until 1979, with Khachiyan's ellipsoid method.
arnsholt · 12h ago
You can encode travelling salesman as an ILP problem, so it’s a pretty tricky problem.
luiwammus · 11h ago
This is actually a pretty poor example, because we can solve huge TSP instances to optimality in practice (see the Concorde solver). There are many trickier combinatorial problems, such as packing or covering problems, that are typically much harder to solve to optimality.
robotpepi · 9h ago
The point is the implied NP-completeness of integer programming.
CamperBob2 · 11h ago
Or very real.
layer8 · 9h ago
Nor very rational.
thrance · 12h ago
It's harder than linear programming though.
lambdaone · 12h ago
It's substantially harder than linear programming: it's NP-hard (its feasibility version is equivalent to SAT), whereas linear programming is solvable in polynomial time (though only in weakly polynomial time with known algorithms).
firesteelrain · 11h ago
I normally use the simplex method, which is fast in practice even though it's not polynomial in the worst case.
gurobi case studies: https://www.gurobi.com/case_studies/
some cplex case studies: https://www.ibm.com/products/ilog-cplex-optimization-studio/...
hexaly (formerly localsolver) case studies: https://www.hexaly.com/customers