Well, historically I personally would have run ESRGAN upscaling and other video-processing AI models as part of a video encoding pipeline.
Right now I would run/train Stable Diffusion, maybe run Facebook's GPT-like model (which apparently just leaked via 4chan), and then run clients on computers, phones, and whatever else to mess with them. But the generative AI space is moving so fast that all of that could be obsolete in a month.
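For the Stable Diffusion part, the low-friction route is Hugging Face's diffusers library. A rough sketch of a local run (the model ID, prompt, and fp16 setting here are just illustrative assumptions, not a recipe I'm vouching for):

    # Minimal local Stable Diffusion sketch using Hugging Face diffusers.
    # Assumes a CUDA GPU and the runwayml/stable-diffusion-v1-5 weights.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,  # half precision to squeeze into less VRAM
    ).to("cuda")                    # this is where the big-VRAM requirement bites

    image = pipe("an astronaut riding a horse on mars").images[0]
    image.save("astronaut.png")

Training (or fine-tuning with something like LoRA/DreamBooth) is a whole different VRAM story, but inference like this fits on a consumer card.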
There are crazy AI projects for all sorts of stuff (text to speech, "vector" searches over data, meme animations generated from stick-figure drawings), and the requirement to run them is generally just "a GPU with lots of VRAM".
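The "vector" search one, for what it's worth, is mostly just embedding your documents and your query with the same model and ranking by cosine similarity. A toy sketch with the sentence-transformers library (model name and documents are made up for illustration):

    # Toy semantic / "vector" search sketch using sentence-transformers.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small embedding model

    docs = [
        "how to encode video with ffmpeg",
        "stable diffusion prompt tips",
        "turning stick figure drawings into animations",
    ]
    doc_emb = model.encode(docs, convert_to_tensor=True)

    query_emb = model.encode("AI video upscaling", convert_to_tensor=True)
    scores = util.cos_sim(query_emb, doc_emb)[0]  # similarity to each doc
    best = int(scores.argmax())
    print(docs[best], float(scores[best]))

Small embedding models like that will even run on CPU; it's the image and video generation stuff that really wants the VRAM.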