You can specify solver precision in cvxpy. For your example, if you run prob.solve(verbose=True, eps_abs=1e-7, eps_rel=1e-7), OSQP gets the correct answer. By default, OSQP solves to tolerances of 1e-4.
You could also call Clarabel, and it will solve the problem correctly, since its default tolerances are 1e-8. It seems the issue is that, by default, OSQP uses tolerances that are extraordinarily loose compared to any IPM. It also looks like SCS (also a first-order solver) uses extremely loose default tolerances (1e-5) and gets the wrong answer.
Another takeaway is that you should use IPMs by default, since they can solve to high accuracy.
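For concreteness, here's a minimal sketch of both fixes in cvxpy. The tiny problem below is a hypothetical stand-in (the original example isn't reproduced here); only the solver options are what's described above:

    import cvxpy as cp

    # Hypothetical stand-in: a tiny scale factor q makes OSQP's default
    # tolerances (1e-4) too loose to resolve the optimum.
    q = 1e-6
    x = cp.Variable()
    prob = cp.Problem(cp.Minimize(cp.square(q * x)), [x >= 1])

    # Tighten OSQP's tolerances, as described above.
    prob.solve(solver=cp.OSQP, verbose=True, eps_abs=1e-7, eps_rel=1e-7)

    # Or switch to an interior-point solver with tight (1e-8) defaults.
    prob.solve(solver=cp.CLARABEL)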
Thanks! I tried multiple Google searches for those parameters and came up empty. Now that you've told me the keywords, I found these parameters buried in a hidden bullet here:
https://www.cvxpy.org/tutorial/solvers/index.html#setting-solver-options
I tend to agree with you about IPMs except:
(a) Stephen spent several years promoting ADMM as a reasonable alternative for small-scale problems. I don't think that's panned out.
(b) I can't explain the last example here, where changing one parameter screws up MOSEK so much.
Actually, the KKT equations *are* ill-conditioned. If you make new variables for q*x and q*y and optimize in terms of those, you get a perfectly reasonable answer with cvxpy.
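A minimal sketch of that substitution, on a made-up problem of the same flavor (the actual example from the post isn't reproduced here):

    import cvxpy as cp

    q = 1e-6  # hypothetical tiny scale factor

    # Original form: the Hessian is (up to a constant factor) q**2 * I,
    # which is nearly singular.
    x, y = cp.Variable(), cp.Variable()
    cp.Problem(cp.Minimize(cp.square(q * x) + cp.square(q * y)),
               [x + y == 1]).solve()

    # Substituted form: u = q*x, v = q*y, so the constraint x + y == 1
    # becomes u + v == q and the Hessian is well conditioned.
    u, v = cp.Variable(), cp.Variable()
    cp.Problem(cp.Minimize(cp.square(u) + cp.square(v)),
               [u + v == q]).solve()
    print(u.value / q, v.value / q)  # recover x = u/q, y = v/q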
Can you clarify what definition you are using for ill-conditioning?
Isn't the Hessian q^2 times the identity matrix, which is very close to singular?
> Loosely specified solver precision now adds one more headache to your
> modeling. You have to ensure that no variables get too small with respect to
> other variables in your code. The annoying part is it’s hard to see which ones
> in advance.
Out of curiosity, would you label this behavior as hindering reproducibility or
replicability (or both, or neither)?
It's an issue of replicability. Reproducibility would be running the code as is, perhaps in a Docker container. Replicability would be running the code after swapping out the optimization solver. These distinctions can be subtle, of course: is it a reproducibility or replicability issue if the code doesn't work when you install a solver update?