Tracing GC involves either a big performance penalty or a big memory-footprint penalty compared to the ownership-based memory management of a language like Rust.
The pervasive reference counting you find in languages like Swift is worse on throughput than a typical tracing GC, but it can often avoid the GC's memory overhead thanks to deterministic destruction, and it typically gives you better worst-case latency. So there isn't a single winner between ARC and GC.
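To make the deterministic-destruction point concrete, here's a minimal Swift sketch (my illustration, not from the original comment): under ARC the buffer is freed the moment its last strong reference goes away, so peak heap usage tracks live data, whereas under a tracing GC the dead object would linger until the next collection cycle.

```swift
// Hedged sketch: shows ARC reclaiming memory at a deterministic point.
final class Buffer {
    let bytes: [UInt8]
    init(size: Int) { bytes = [UInt8](repeating: 0, count: size) }
    deinit { print("buffer freed") }
}

func process() {
    let buf = Buffer(size: 64 * 1024 * 1024)
    print("using \(buf.bytes.count) bytes")
}   // buf's last reference is gone; its deinit has already run by this point

process()
print("memory reclaimed before this line")
```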
I think Fil-C could use a copying (moving) collector, which a normal C allocator can't, because Fil-C doesn't expose raw machine pointers to user code. But I don't know whether it actually does.

Copying collectors often suffer from high memory overhead, but OCaml's GC, for example, is pretty frugal with memory. And RC by itself doesn't guarantee good worst-case latency: decrementing the last reference to the root of an arbitrarily large tree takes time proportional to the size of the tree. You probably already know all of this, but someone else reading the thread may not.
So I'm not convinced there's no single winner between ARC and GC, but you could be right.
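Here's a minimal Swift sketch (my illustration, not from the thread) of that worst-case-latency point: dropping the last reference to a structure holding a million heap objects makes ARC free all of them synchronously, right at the assignment, so the pause scales with the number of objects being torn down.

```swift
import Foundation

// Hedged sketch: one pointer write by the programmer, a million
// deallocations by ARC, all on the current thread.
final class Node {
    var value: Int
    init(_ value: Int) { self.value = value }
}

var nodes: [Node]? = (0..<1_000_000).map(Node.init)

let start = Date()
nodes = nil  // last reference dropped: every Node is freed right here
let seconds = Date().timeIntervalSince(start)
print(String(format: "synchronous teardown took %.3f s", seconds))
```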