It depends on the use case. For high-speed microscopes, I may get a request that says, "we need to support 4.2 gigabytes/second of continuous ingest for an 18-hour imaging run." In those situations, it's best to test with realistic data.
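As a rough illustration of what "realistic" means there, the sizing arithmetic for that microscope spec works out like this (a minimal sketch; the 10% headroom factor is my own assumption, not part of the original request):

```python
# Sizing a realistic test for the high-speed microscope example:
# 4.2 GB/s of continuous ingest sustained over an 18-hour run.

INGEST_GBPS = 4.2    # required ingest rate, gigabytes per second
RUN_HOURS = 18       # length of one imaging run
HEADROOM = 1.10      # assumed 10% safety margin (my assumption)

run_seconds = RUN_HOURS * 3600
total_tb = INGEST_GBPS * run_seconds / 1000          # total data written, TB
print(f"Data written per run: {total_tb:,.1f} TB")   # ~272.2 TB
print(f"Capacity to provision with headroom: {total_tb * HEADROOM:,.1f} TB")
```

That ~272 TB per run is also a useful floor for the working-set size of the test itself: anything much smaller risks measuring cache behavior instead of sustained ingest.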
For general video and media workloads, it may be something like, "we have to accommodate 40 editors working over 10GbE (2 x 100GbE at the server) and minimize contention while ingesting from other sources."
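For that kind of workload, the first sanity check is just bandwidth budgeting: can the server uplinks cover worst-case client demand? A sketch, assuming every editor could saturate their 10GbE link at once (that worst case is an assumption for illustration):

```python
# Bandwidth budget for the media example: 40 editors on 10GbE clients,
# server fronted by 2 x 100GbE.

EDITORS = 40
CLIENT_LINK_GBIT = 10        # per-editor link, gigabits/s
SERVER_LINKS_GBIT = 2 * 100  # aggregate server uplink, gigabits/s

worst_case_demand = EDITORS * CLIENT_LINK_GBIT       # 400 Gbit/s if all saturate
oversubscription = worst_case_demand / SERVER_LINKS_GBIT
fair_share = SERVER_LINKS_GBIT / EDITORS             # Gbit/s each under full load

print(f"Worst-case demand: {worst_case_demand} Gbit/s "
      f"({oversubscription:.0f}:1 oversubscription)")
print(f"Fair share per editor at saturation: {fair_share:.0f} Gbit/s")
```

The 2:1 oversubscription is why "minimize contention" is part of the requirement: the test has to exercise concurrent editor streams plus the background ingest, not just one fast client.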
I work with iozone to establish a baseline, and I also use a "frametest" utility that helps mimic some of the I/O characteristics of video workloads.
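For the iozone baseline, a typical throughput-mode run looks something like the sketch below, wrapped in Python only for convenience. The thread count, file size, record size, and mount point are placeholders to adapt to the target system (frametest's options vary by build, so I haven't scripted it here):

```python
# Minimal iozone baseline: throughput mode, direct I/O, sequential
# write (-i 0) and read (-i 1), with fsync included in the timing (-e).
# Thread count, sizes, and the mount point below are placeholders.
import subprocess

THREADS = 8
MOUNT = "/mnt/target"  # placeholder mount point for the storage under test

cmd = [
    "iozone",
    "-t", str(THREADS),   # throughput mode with N parallel worker threads
    "-s", "4g",           # per-thread file size (keep it well past cache size)
    "-r", "1m",           # record (I/O) size
    "-i", "0",            # test 0: write/rewrite
    "-i", "1",            # test 1: read/reread
    "-I",                 # use O_DIRECT to bypass the page cache
    "-e",                 # include flush (fsync) in the timing
    "-F", *[f"{MOUNT}/iozone.{n}" for n in range(THREADS)],  # one file per thread
]
subprocess.run(cmd, check=True)
```

Running with direct I/O and a per-thread file size well beyond RAM keeps the baseline honest: you measure the storage, not the client's cache.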