If Apple wanted to put an engineering team on solving this problem, they could record all the raw sensor data for the video with the regular 'auto' settings, then decide after the clip is recorded what shutter speed, iso, etc. to use, and reprocess that raw data to simulate what that moment in time would have looked like with a different shutter speed.
I'm sure modern neural nets would do a decent job of simulating what a frame taken with one iso/shutter/focus would look like with a slightly different iso/shutter/focus.
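A rough sketch of what that reprocessing could look like, assuming the raw capture is simply a stack of short linear-light exposures (the frame layout, exposure values, and the simulate_shutter helper are all hypothetical, not anything Apple's pipeline actually exposes):

```python
import numpy as np

def simulate_shutter(raw_frames, src_exposure_s, dst_exposure_s, full_well=1.0):
    """Approximate a different shutter speed by accumulating consecutive
    short exposures from the raw capture.

    raw_frames: (N, H, W) array of linear sensor values, each captured
    over src_exposure_s seconds.
    """
    n = max(1, int(round(dst_exposure_s / src_exposure_s)))
    # Summing n consecutive linear frames approximates the light (and the
    # motion blur) gathered over an n-times-longer exposure.
    accumulated = raw_frames[:n].sum(axis=0)
    # Correct for dst_exposure_s not being an exact multiple of src_exposure_s.
    accumulated *= dst_exposure_s / (n * src_exposure_s)
    # A real sensor would clip anything past its saturation point.
    return np.clip(accumulated, 0.0, full_well)

# Example: 240 fps raw frames re-rendered as if shot with a 1/60 s shutter.
frames = np.random.rand(8, 4, 4) * 0.3
blurred = simulate_shutter(frames, src_exposure_s=1/240, dst_exposure_s=1/60)
```

Frame accumulation only works in one direction (longer simulated shutters), which is where the neural net idea would have to pick up the slack.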
First, I doubt users are asking for this. Those who want it are going to use a manual videography app like the OP's. The other 99.9% want a camera that just works.
Second, modern neural nets are good, but not perfect. I can reliably tell whether something was shot with real bokeh or simulated in software. And for serious productions like a commercial shoot, nobody wants to change the shutter speed, aperture, etc. mid-shoot: the DP already knows what look they want before they start filming.
I think a neural network could do it. You just train it on a bunch of videos with different shutter speeds, and then you ask it to convert a given video from one speed to another.
I'm sure it would quickly learn to add/remove motion blur on moving things as appropriate.
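If high-frame-rate footage is available, the training pairs more or less come for free: average n consecutive frames to fake the longer shutter and feed the single frame as the input. A minimal PyTorch sketch of that idea (the ShutterNet model, the make_pair helper, and the tensor shapes are made up for illustration):

```python
import torch
import torch.nn as nn

class ShutterNet(nn.Module):
    """Toy per-frame model: maps a short-exposure frame toward a longer-exposure look."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def make_pair(clip, n=4):
    """Build a (short-shutter input, simulated long-shutter target) pair from
    consecutive frames of a high-frame-rate clip with shape (T, 3, H, W)."""
    short = clip[0]
    target = clip[:n].mean(dim=0)  # frame averaging stands in for a longer shutter
    return short, target

model = ShutterNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

clip = torch.rand(8, 3, 64, 64)          # stand-in for real footage
short, target = make_pair(clip)
pred = model(short.unsqueeze(0))         # add a batch dimension
loss = nn.functional.l1_loss(pred, target.unsqueeze(0))
loss.backward()
opt.step()
```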
But in addition to determining motion blur, shutter speed also massively affects which areas of the image land above or below the brightness range the sensor is capable of picking up.
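A quick back-of-the-envelope illustration of why that's the hard part (the radiance values and full-well level are made up, and the sensor model is idealized linear math):

```python
import numpy as np

# Hypothetical linear scene radiance (arbitrary units per second of exposure).
radiance = np.array([10.0, 40.0, 150.0])
full_well = 1.0  # the sensor saturates (clips) at this value

def expose(radiance, shutter_s):
    """Ideal sensor response: light gathered over the shutter time, then clipped."""
    return np.clip(radiance * shutter_s, 0.0, full_well)

slow = expose(radiance, 1/50)    # [0.2, 0.8, 1.0]      -- brightest patch clips
fast = expose(radiance, 1/400)   # [0.025, 0.1, 0.375]  -- nothing clips

# Rescaling the slow exposure cannot recover the clipped highlight, so
# converting footage to a faster simulated shutter after the fact takes more
# than a brightness adjustment.
print(slow * (1/400) / (1/50))   # [0.025, 0.1, 0.125]  -- third value is wrong
```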