It’s not actually that hard; it can be done easily on any modern computer with a lot of RAM (albeit taking longer than the near-instant processing on a camera-phone photo).
The images also aren’t that large, just high-res for a single focal-plane image. I have personally taken WAY higher-resolution photos than 3.2 gigapixels using post-process stitching of hundreds (or, in one case, thousands) of photos. My highest-resolution one was shot from the roof of a building in San Francisco: just over 2,000 twenty-five-megapixel frames taken with a 400mm lens, producing a combined image over 10x this resolution (around 42 billion pixels).
Fun fact: 3.2 Gp can still be stored as a single JPEG. The JPEG format tops out at about 4 Gp (64k x 64k). PNG generally can’t go that high in practice, since most tools top out at a 2GB file size (a byte limit rather than a pixel limit). For the ~42 Gp image mentioned above, the format I had to use was PSB. It’s been almost a decade since I took that pano, but if memory serves the file was around 150GB.
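Quick back-of-envelope check on those format limits (the 4 Gp figure follows from JPEG’s 16-bit dimension fields; the raw-size estimate assumes plain 8-bit RGB, which is in the right ballpark for a layered PSB):

```python
# JPEG stores width/height in 16-bit fields, so 65535 px per side max.
JPEG_MAX_SIDE = 65_535
jpeg_max_px = JPEG_MAX_SIDE ** 2
print(jpeg_max_px / 1e9)   # ~4.29 gigapixels, so a 3.2 Gp image fits

# A ~42 Gp pano at 3 bytes/pixel (8-bit RGB), uncompressed:
pano_px = 42e9
print(pano_px * 3 / 1e9)   # ~126 GB raw, consistent with a ~150 GB layered PSB
```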
Not that I have such images to try this on, so I'm just guessing:
3200 megapixels is around 200x bigger than a typical photo today (~16 MP), so figure around 200x slower. It'd also still fit in e.g. 10GB of RAM.
Applying a filter to a standard photo takes well under a second single-threaded (depending on what kind of filter you mean; let's say an approximated gaussian blur), so it'd be around a minute on a huge one, and much faster with e.g. 8 or 16 threads.
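To make "approximated gaussian blur" concrete: a common trick is a few box blurs in a row, each computed as a running sum so the cost is O(pixels) regardless of blur radius, which is why the time estimate above scales linearly with image size. A minimal NumPy sketch of that idea (just an illustration, not anyone's actual pipeline):

```python
import numpy as np

def box_blur_1d(a, r, axis):
    """Running-mean box filter of radius r along one axis, via cumulative sums."""
    n = a.shape[axis]
    pad = [(0, 0)] * a.ndim
    pad[axis] = (r + 1, r)          # edge-replicate so output size matches input
    c = np.cumsum(np.pad(a, pad, mode="edge"), axis=axis, dtype=np.float64)
    hi = np.take(c, np.arange(2 * r + 1, n + 2 * r + 1), axis=axis)
    lo = np.take(c, np.arange(0, n), axis=axis)
    return (hi - lo) / (2 * r + 1)  # window sum / window width

def approx_gaussian_blur(img, r=4, passes=3):
    """Three successive box blurs closely approximate a Gaussian blur.

    Each pass touches every pixel once, independent of r, so total work
    grows linearly with pixel count (200x the pixels -> ~200x the time).
    """
    out = img.astype(np.float64)
    for _ in range(passes):
        out = box_blur_1d(out, r, axis=0)
        out = box_blur_1d(out, r, axis=1)
    return out
```

This is also trivially parallelizable by splitting the image into horizontal strips, which is where the 8-16 thread speedup comes from.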
In terms of throughput it's not a problem at all. With proper GPU computing techniques you can do full color correction and the like at dozens of gigapixels per second on a single desktop GPU (bound by memory bandwidth).
The problem is more with bandwidth and working memory. For example, if you stream the image to a GPU (at 6.4 GB per image, assuming 16-bit greyscale), you're bottlenecked by PCIe, plain and simple. GPU memory sizes aren't favorable either: most models don't have enough memory to hold one input and one output buffer (assuming you also want 16-bit output). So with a single GPU, the bus would limit you to around 1-2 pictures per second.
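The bus-limit arithmetic, spelled out (assuming roughly 16 GB/s for a PCIe 3.0 x16 link, and that both input and output cross the bus):

```python
pixels = 3.2e9
bytes_in = pixels * 2            # 16-bit greyscale in  -> 6.4 GB
bytes_out = pixels * 2           # 16-bit out           -> 6.4 GB
pcie3_x16 = 16e9                 # ~16 GB/s, PCIe 3.0 x16 (rough figure)
imgs_per_sec = pcie3_x16 / (bytes_in + bytes_out)
print(imgs_per_sec)              # ~1.25 images/s, i.e. the "1-2 per second" above
```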
However, the quoted throughput is "30 TB per night", which is only about one GB per second. So it's plausible (though unlikely that they do) to process all of the data on a single desktop PC with a GPU and a dual 10 GbE NIC.
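Checking that "one GB per second" figure (the ~8 hours of observing per night is my assumption, not from the quote):

```python
night_bytes = 30e12              # "30 TB per night"
night_seconds = 8 * 3600         # assuming ~8 hours of observing time
gb_per_s = night_bytes / night_seconds / 1e9
print(gb_per_s)                  # ~1.04 GB/s sustained

# A dual 10 GbE NIC carries at most 2 * 1.25 GB/s = 2.5 GB/s,
# so the network link has headroom over the data rate.
```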
One image (sensor) at a time. We have multiple types of image processing we will do, some of which happens immediately and some of which happens several months later.