AFAIR, Karpenter handles that. AWS gives some lead time (typically a two-minute interruption notice) before reclaiming a spot instance. Karpenter puts a taint on the node and spins up a replacement. The scheduler gently evicts the pods, and nothing is lost.
This does require architecting your services to be shutdown-tolerant, but that's par for the course. If you're using Kubernetes you've probably already settled on the "cattle, not pets" idea.
> The scheduler gently evicts the pods, and nothing is lost.
If you have heavy-duty work being done, it might get kill -9'd once the grace period is exceeded. If you weren't using spot instances, you'd have full control over the grace period.
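To make the shutdown sequence concrete: Kubernetes first sends SIGTERM, waits out the grace period, then sends SIGKILL, which a process cannot catch. A minimal sketch of a worker that reacts to the SIGTERM (the handler and flag names here are illustrative, not any particular framework's API):

```python
import os
import signal

# Flag flipped by the SIGTERM handler; the worker loop checks it
# between units of work so it can stop cleanly within the grace period.
shutting_down = False

def handle_sigterm(signum, frame):
    # Kubernetes delivers SIGTERM first; SIGKILL follows after the
    # grace period and cannot be handled, so finish up quickly here.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the kubelet delivering SIGTERM to this process.
os.kill(os.getpid(), signal.SIGTERM)

print(shutting_down)  # → True
```

The key point is that the handler only sets a flag; the actual cleanup (draining a queue, flushing buffers, deregistering from a load balancer) happens in the main loop, which must complete before the grace period runs out.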
You can of course handle this too, by breaking your work into units that can be interrupted without losing progress, but depending on what you're doing this can get very complicated. The takeaway is that you can't blindly start using spot instances, even if your app has been running happily in Kubernetes for a while.