I'm not working in this domain, but why not put some numbers (a lot of them) on the claim that it's dramatically faster than Spark? Maybe show how a 10-hour Spark problem can be reduced to minutes.
Frank McSherry (the one behind the timely dataflow library and materialize.com) showed many years ago that there is plenty of performance headroom above Spark, by demonstrating that a single laptop can beat a Spark cluster.
It's easy to get into situations where you're paying massive costs with serialization, deserialization, and network I/O, and I believe graph operations with Spark are one of those situations. I would be curious if running Spark in local mode with a single thread would actually improve the runtime, or if it would reveal other issues with the Spark graph libraries.
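That experiment is easy to set up. A hypothetical invocation (the script name is a placeholder for whatever graph workload is being measured):

```shell
# Run the same job on a single local thread: "local[1]" gives Spark one
# worker thread and no cluster, so network I/O and cross-node
# serialization drop out of the picture entirely.
spark-submit \
  --master "local[1]" \
  --name "single-thread-baseline" \
  your_graph_job.py   # placeholder for the actual graph workload

# Compare wall-clock time against the cluster run. If local[1] wins,
# the cost was coordination overhead rather than compute.
```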
Generally memory layout is extremely important for graph problems, even on a single node. As I understand it the Spark approach does not embrace a "flat" layout, but rather does lots of pointer chasing, which can really slow things down. Because Spark isn't very careful about memory usage and layout, you outgrow a single node quite fast, and then you're back to really bad distributed scaling characteristics.
So can a timely dataflow cluster beat a Spark data center? Seriously, one shouldn't hide one's value points; one has to advertise them. Maybe even include a spreadsheet of costs saved for the decision makers.