Also, this tool seems to be geared towards people who already get how PIDs work. That's a completely valid assumption to make; I'm just not really in that group.
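For anyone else in that group: a PID controller is just three terms summed together, each acting on the setpoint error. Here's a minimal sketch in Python; the gains and the toy first-order plant are made-up illustration values, not anything recommended by the tool.

```python
# Minimal PID: output = Kp*error + Ki*integral(error) + Kd*d(error)/dt.
# Gains and the toy plant below are arbitrary, for illustration only.

def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": 0.0}
    def step(setpoint, measurement):
        error = setpoint - measurement
        state["integral"] += error * dt                   # I: accumulates past error
        derivative = (error - state["prev_error"]) / dt   # D: reacts to error rate
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative
    return step

# Toy plant: first-order lag x' = -x + u, simulated with Euler steps.
dt = 0.01
pid = make_pid(kp=2.0, ki=1.0, kd=0.1, dt=dt)
x = 0.0
for _ in range(2000):
    u = pid(setpoint=1.0, measurement=x)
    x += dt * (-x + u)
print(round(x, 3))  # settles near the setpoint of 1.0
```

The I term is what removes steady-state error; the D term damps the response. Most of the tuning trade-offs the tool exposes come from balancing those three gains.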
You are right, but the problem with a possible analogous "Machine Learning Theory" is that machine learning problems are highly nonlinear, while control theory has only been "solved" for linear problems. If we could solve nonlinear control, that could immediately be applied to machine learning.
“Linear systems are important because we can solve them.” — Richard Feynman
Fair point. But of course the reasons for the nonlinearities are different: ML is not inherently nonlinear; it is only nonlinear because linear neural nets are not very useful.
On the other hand, linear control theory is quite useful for many real-world problems.
The problem with many adaptive control techniques is that it is very difficult to prove closed-loop stability and performance margins: such systems are highly nonlinear, and we do not yet have the mathematics to provide simple guarantees. (For example, I would not get into an aircraft whose control systems have not been rigorously, mathematically designed.)
The integral problem of LQR and MPC is indeed not the first topic discussed in textbooks, but there has been a solution for it for years, called the "disturbance model". You can also get an "adaptive MPC" by following some design procedures. Check out this reference:
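To make the disturbance-model idea concrete, here is a toy scalar sketch (not from the reference; all gains and numbers are hand-picked for illustration): augment the model with an unmeasured constant disturbance state, estimate it with an observer, and have the controller cancel the estimate. That cancellation is what produces integral-like, offset-free behavior.

```python
# Disturbance-model sketch (scalar, illustrative numbers):
# true plant  x+ = x + u + d  with unknown constant disturbance d.
# Model the disturbance as a state (d+ = d), run a Luenberger observer
# on the augmented state [x, d], and cancel the estimate in the control.

d_true = 0.7             # unknown to the controller
x = 0.0
x_hat, d_hat = 0.0, 0.0
l1, l2 = 0.8, 0.3        # observer gains (hand-picked, stable error dynamics)
k = 0.5                  # state-feedback gain toward setpoint r = 0

for _ in range(200):
    u = -k * x_hat - d_hat           # cancel the estimated disturbance
    y = x                            # measurement
    innov = y - x_hat                # innovation
    x_hat = x_hat + u + d_hat + l1 * innov   # predict + correct x estimate
    d_hat = d_hat + l2 * innov               # correct disturbance estimate
    x = x + u + d_true               # true plant update

print(round(d_hat, 3), round(x, 3))  # d_hat converges to 0.7, x to 0
```

The disturbance state itself is uncontrollable (you can't change d), but it is observable, and feeding the estimate back removes the steady-state offset.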
Thanks for the reference. I agree that the solution to getting I action is to model the disturbance and augment the state. The system does become uncontrollable in the augmented state space.
I've run into a conceptual problem when using this approach with [infinite-horizon] LQR to try to recover what looks like a PID controller, which IIRC is due to the control u not necessarily converging to 0. This is expected, to counteract a constant disturbance, but it means that the sum doesn't converge.
I only skimmed the reference, but I did not find a discussion of this issue.
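To spell out the non-converging-u point with a toy scalar example (gains hand-picked to stabilize, not an actual LQR solution): integral action regulates the state to zero precisely by settling u at minus the disturbance, so any cost that sums u^2 over an infinite horizon diverges.

```python
# Why u does not converge to 0 under a constant disturbance (illustrative):
# plant x+ = x + u + d with d = 0.5. The integral term drives x -> 0,
# but only by settling u at -d, so a running sum of u^2 grows without bound.

d = 0.5
x, integ = 0.0, 0.0
kp, ki = 0.6, 0.2        # hand-picked stabilizing gains, not LQR-optimal
cost_u = 0.0

for _ in range(500):
    integ += x                       # integral of the regulation error (r = 0)
    u = -kp * x - ki * integ
    cost_u += u * u                  # accumulates ~d^2 per step once settled
    x = x + u + d

print(round(x, 3), round(u, 3))  # x settles at 0, u at -0.5
```

This is why formulations that penalize the input move (delta-u) or the deviation from a steady-state input target are commonly used instead of penalizing u directly, though I can't say which the linked reference uses.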
The trick is that, even though the augmented state is uncontrollable, you still use it in the predictions, so the MPC algorithm can still compensate for it. Take a look at the second-to-last graph in the paper, and see how that technique improves predictions after learning the real-time disturbances.
That's not what I was concerned about. The subspace that's uncontrollable is the disturbance components of the state, which I don't care to control anyway.
That is true; that is why there is a slider in the tool that lets you trade off performance for robustness. The Bode plot is also an indication of how the closed loop performs in the frequency domain, so you would normally choose different tunings for specific use cases (e.g. one frequency response for tracking and a different one for just regulating).
You know, after I posted I noticed there was a "next" button and started using it (I thought it was GitHub only). You pretty much captured a lot of the tradeoffs. Very nicely, might I add: the UI/UX funneling is clever! I'm off to cram a step function into my flight controller... :)